Introduction
This bibliography offers a starting point for exploring GeoAI research, encompassing key publications, textbooks, and online resources. Consider it a living document, constantly evolving as the field progresses.
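The section lists below are drawn from a shared Zotero library (ID 5447768), which this page queries through the Zotpress plugin using one tag per section (for example, "core book" or "map distinction"), sorted by date in descending order. As a minimal sketch, and assuming that ID refers to a publicly readable Zotero group library (the page itself does not confirm this), the same lists can be reproduced directly against the Zotero Web API; the section tag below is illustrative:

import requests

# Query parameters mirror this page's Zotpress embeds; it is an
# assumption, not confirmed by the page, that 5447768 is a public
# Zotero group library.
GROUP_ID = 5447768
SECTION_TAG = "core article"  # one Zotero tag per section on this page

response = requests.get(
    f"https://api.zotero.org/groups/{GROUP_ID}/items",
    params={
        "tag": SECTION_TAG,   # filter items by the section tag
        "sort": "date",       # newest first, as on this page
        "direction": "desc",
        "limit": 50,
        "format": "json",
    },
    timeout=30,
)
response.raise_for_status()

# Print a compact "Creator. Title. Date" line per item.
for item in response.json():
    data = item["data"]
    creator = item["meta"].get("creatorSummary", "Unknown")
    print(f"{creator}. {data.get('title', '')}. {data.get('date', 'n.d.')}")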
Core Books
Gao, S. et al. Handbook of Geospatial Artificial Intelligence. 2023. https://www.taylorfrancis.com/books/9781003308423
Chiang, Y.-Y. et al. Training Deep Learning Models for Geographic Feature Recognition from Historical Maps. In: Using Historical Maps in Scientific Studies: Applications, Challenges, and Best Practices. 2020. https://doi.org/10.1007/978-3-319-66908-3_4
Core Articles
Hu, Y. et al. A five-year milestone: reflections on advances and limitations in GeoAI research. 2024. https://doi.org/10.1080/19475683.2024.2309866
Kang, Y. et al. Artificial intelligence studies in cartography: a review and synthesis of methods, applications, and ethics. 2024. https://doi.org/10.1080/15230406.2023.2295943
Harrie, L. et al. Machine learning in cartography. 2024. https://doi.org/10.1080/15230406.2023.2295948
Robinson, A.C. et al. Cartography in GeoAI: Emerging Themes and Research Challenges. 2023. https://doi.org/10.1145/3615886.3627734
Chen, M. et al. Artificial intelligence and visual analytics in geographical space and cyberspace: Research opportunities and challenges. 2023. https://doi.org/10.1016/j.earscirev.2023.104438
Elizar, E. et al. A Review on Multiscale-Deep-Learning Applications. 2022. https://doi.org/10.3390/s22197384
Li, W. et al. GeoAI for Large-Scale Image Analysis and Machine Vision: Recent Progress of Artificial Intelligence in Geography. 2022. https://doi.org/10.3390/ijgi11070385
Usery, E.L. et al. GeoAI in the US Geological Survey for topographic mapping. 2022. https://doi.org/10.1111/tgis.12830
Li, W. GeoAI: Where machine learning and big data converge in GIScience. 2020. https://josis.org/index.php/josis/article/view/116
Janowicz, K. et al. GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. 2020. https://doi.org/10.1080/13658816.2019.1684500
Hu, Y. et al. GeoAI at ACM SIGSPATIAL: progress, challenges, and future directions. 2019. https://doi.org/10.1145/3377000.3377002
Map Distinction
Li, J. Computational Cartographic Recognition: Exploring the Use of Machine Learning and Other Computational Approaches to Map Reading. Dissertation, The Ohio State University. 2022. http://rave.ohiolink.edu/etdc/view?acc_num=osu1650493323790506
Schnürer, R. et al. Detection of Pictorial Map Objects with Convolutional Neural Networks. 2021. https://doi.org/10.1080/00087041.2020.1738112
Map Localisation
Oh, B.-W. Map Detection using Deep Learning. 2020. https://doi.org/10.14801/JAITC.2020.10.2.61
Feature Extraction (Points)
Vassányi, G. et al. Automatic vectorization of point symbols on archive maps using deep convolutional neural network. 2021
Guo, M. et al. Deep learning framework for geological symbol detection on geological maps. 2021
Kong, Y. et al. A Mountain Summit Recognition Method Based on Improved Faster R-CNN. 2021
Saeedimoghaddam, M. et al. Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks. 2020
Torres, R.N. et al. A Deep Learning Model for Identifying Mountain Summits in Digital Elevation Model Data. 2018
Feature Extraction (Lines)
Zhao, Y. et al. AU3-GAN: A Method for Extracting Roads from Historical Maps Based on an Attention Generative Adversarial Network. 2024
Jiao, C. et al. A novel framework for road vectorization and classification from historical maps based on deep learning and symbol painting. 2024
Xia, X. et al. Contrastive Pretraining for Railway Detection: Unveiling Historical Maps with Transformers. 2023
Wu, S. et al. Leveraging uncertainty estimation and spatial pyramid pooling for extracting hydrological features from scanned historical topographic maps. 2022
Jiao, C. et al. A fast and effective deep learning approach for road extraction from historical maps by automatically generating training data with symbol reconstruction. 2022
Ran, W. et al. Raster Map Line Element Extraction Method Based on Improved U-Net Network. 2022
Jiao, C. et al. A Novel Data Augmentation Method to Enhance the Training Dataset for Road Extraction from Swiss Historical Maps. 2022
Avcı, C. et al. Deep Learning-Based Road Extraction From Historical Maps. 2022
Mao, X. et al. Deep learning-enhanced extraction of drainage networks from digital elevation models. 2021
Ekim, B. et al. Automatic Road Extraction from Historical Maps Using Deep Learning Techniques: A Regional Case Study of Turkey in a German World War II Map. 2021
Satari, R. et al. Extraction of linear structures from digital terrain models using deep learning. 2021
Yang, X. et al. T2I-CycleGAN: A CycleGAN for Maritime Road Network Extraction from Crowdsourcing Spatio-Temporal AIS Trajectory Data. 2021
Petitpierre, R. et al. Generic Semantic Segmentation of Historical Maps. 2021
Ma, J. et al. Automatic identification method of overpasses based on deep learning. 2020
Feature Extraction (Polygons)
Xia, X. et al. Vectorizing historical maps with topological consistency: A hybrid approach using transformers and contour-based instance segmentation. 2024
Šanca, S. et al. An End-to-End Deep Learning Workflow for Building Segmentation, Boundary Regularization and Vectorization of Building Footprints. 2023
Mai, G. et al. Towards general-purpose representation learning of polygonal geometries. 2023
ss.%20We%20explore%20two%20different%20designs%20for%20the%20encoder%3A%20one%20derives%20all%20representations%20in%20the%20spatial%20domain%20and%20can%20naturally%20capture%20local%20structures%20of%20polygons%3B%20the%20other%20leverages%20spectral%20domain%20representations%20and%20can%20easily%20capture%20global%20structures%20of%20polygons.%20For%20the%20spatial%20domain%20approach%20we%20propose%20ResNet1D%2C%20a%201D%20CNN-based%20polygon%20encoder%2C%20which%20uses%20circular%20padding%20to%20achieve%20loop%20origin%20invariance%20on%20simple%20polygons.%20For%20the%20spectral%20domain%20approach%20we%20develop%20NUFTspec%20based%20on%20Non-Uniform%20Fourier%20Transformation%20%28NUFT%29%2C%20which%20naturally%20satisfies%20all%20the%20desired%20properties.%20We%20conduct%20experiments%20on%20two%20different%20tasks%3A%201%29%20polygon%20shape%20classification%20based%20on%20the%20commonly%20used%20MNIST%20dataset%3B%202%29%20polygon-based%20spatial%20relation%20prediction%20based%20on%20two%20new%20datasets%20%28DBSR-46K%20and%20DBSR-cplx46K%29%20constructed%20from%20OpenStreetMap%20and%20DBpedia.%20Our%20results%20show%20that%20NUFTspec%20and%20ResNet1D%20outperform%20multiple%20existing%20baselines%20with%20significant%20margins.%20While%20ResNet1D%20suffers%20from%20model%20performance%20degradation%20after%20shape-invariance%20geometry%20modifications%2C%20NUFTspec%5Cu00a0is%20very%20robust%20to%20these%20modifications%20due%20to%20the%20nature%20of%20the%20NUFT%20representation.%20NUFTspec%20is%20able%20to%20jointly%20consider%20all%20parts%20of%20a%20multipolygon%20and%20their%20spatial%20relations%20during%20prediction%20while%20ResNet1D%20can%20recognize%20the%20shape%20details%20which%20are%20sometimes%20important%20for%20classification.%20This%20result%20points%20to%20a%20promising%20research%20direction%20of%20combining%20spatial%20and%20spectral%20representations.%22%2C%22date%22%3A%222023-04-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs10707-022-00481-2%22%2C%22ISSN%22%3A%221573-7624%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs10707-022-00481-2%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-20T17%3A03%3A15Z%22%7D%7D%2C%7B%22key%22%3A%22GXVJH7RC%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xydas%20et%20al.%22%2C%22parsedDate%22%3A%222022-10-19%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EXydas%2C%20C.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.scitepress.org%5C%2FLink.aspx%3Fdoi%3D10.5220%5C%2F0010839700003124%27%3EBuildings%20Extraction%20from%20Historical%20Topographic%20Maps%20via%20a%20Deep%20Convolution%20Neural%20Network%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Buildings%20Extraction%20from%20Historical%20Topographic%20Maps%20via%20a%20Deep%20Convolution%20Neural%20Network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christos%22%2C%22lastName%22%3A%22Xydas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anastasios%22%2C%22lastName%22%3A%22Kesidis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kleomenis%22%2C%22lastName%22%3A%22Kalogeropoulo
s%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andreas%22%2C%22lastName%22%3A%22Tsatsaris%22%7D%5D%2C%22abstractNote%22%3A%22Digital%20Library%22%2C%22date%22%3A%222022-10-19%22%2C%22proceedingsTitle%22%3A%22%22%2C%22conferenceName%22%3A%2217th%20International%20Conference%20on%20Computer%20Vision%20Theory%20and%20Applications%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.5220%5C%2F0010839700003124%22%2C%22ISBN%22%3A%22978-989-758-555-5%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.scitepress.org%5C%2FLink.aspx%3Fdoi%3D10.5220%5C%2F0010839700003124%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A50%3A43Z%22%7D%7D%2C%7B%22key%22%3A%22PBQSRV5B%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Farmakis-Serebryakova%20et%20al.%22%2C%22parsedDate%22%3A%222022-07%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EFarmakis-Serebryakova%2C%20M.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F7%5C%2F395%27%3ETerrain%20Segmentation%20Using%20a%20U-Net%20for%20Improved%20Relief%20Shading%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Terrain%20Segmentation%20Using%20a%20U-Net%20for%20Improved%20Relief%20Shading%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marianna%22%2C%22lastName%22%3A%22Farmakis-Serebryakova%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Since%20landforms%20composing%20land%20surface%20vary%20in%20their%20properties%20and%20appearance%2C%20their%20shaded%20reliefs%20also%20present%20different%20visual%20impression%20of%20the%20terrain.%20In%20this%20work%2C%20we%20adapt%20a%20U-Net%20so%20that%20it%20can%20recognize%20a%20selection%20of%20landforms%20and%20can%20segment%20terrain.%20We%20test%20the%20efficiency%20of%2010%20separate%20models%20and%20apply%20an%20ensemble%20approach%2C%20where%20all%20the%20models%20are%20combined%20to%20potentially%20outperform%20single%20models.%20Our%20algorithm%20works%20particularly%20well%20for%20block%20mountains%2C%20Prealps%2C%20valleys%2C%20and%20hills%2C%20delivering%20average%20precision%20and%20f1%20values%20above%2060%25.%20Segmenting%20plateaus%20and%20folded%20mountains%20is%20more%20challenging%2C%20and%20their%20precision%20values%20are%20rather%20scattered%20due%20to%20smaller%20areas%20available%20for%20training.%20Mountains%20formed%20by%20erosion%20processes%20are%20the%20least%20recognized%20landform%20of%20all%20because%20of%20their%20similarities%20with%20other%20landforms.%20The%20highest%20accuracy%20of%20one%20of%20the%2010%20models%20is%2065%25%2C%20while%20the%20accuracy%20of%20the%20ensemble%20is%2061%25.%20We%20apply%20relief%20shading%20techniques%20that%20were%20found%20to%20be%20efficient%20regarding%20specific%20landforms%20within%20corresponding%20segmented%20areas%20and%20blend%20them%20together.%20Finally%2C%20we%20test%20the%20trained%20model%20with%20the%20best%20accuracy%20on%20other%20mountainous%20areas%20around%20the%20world%
2C%20and%20it%20proves%20to%20work%20in%20other%20regions%20beyond%20the%20training%20area.%22%2C%22date%22%3A%222022%5C%2F7%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11070395%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F7%5C%2F395%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A55%3A01Z%22%7D%7D%2C%7B%22key%22%3A%22DAGFT8K6%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Du%20et%20al.%22%2C%22parsedDate%22%3A%222022-01%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EDu%2C%20K.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F1424-8220%5C%2F22%5C%2F19%5C%2F7594%27%3EComparison%20of%20RetinaNet-Based%20Single-Target%20Cascading%20and%20Multi-Target%20Detection%20Models%20for%20Administrative%20Regions%20in%20Network%20Map%20Pictures%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Comparison%20of%20RetinaNet-Based%20Single-Target%20Cascading%20and%20Multi-Target%20Detection%20Models%20for%20Administrative%20Regions%20in%20Network%20Map%20Pictures%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kaixuan%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xianghong%22%2C%22lastName%22%3A%22Che%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yong%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiping%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22An%22%2C%22lastName%22%3A%22Luo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ruiyuan%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shenghua%22%2C%22lastName%22%3A%22Xu%22%7D%5D%2C%22abstractNote%22%3A%22There%20is%20a%20critical%20need%20for%20detection%20of%20administrative%20regions%20through%20network%20map%20pictures%20in%20map%20censorship%20tasks%2C%20which%20can%20be%20implemented%20by%20target%20detection%20technology.%20However%2C%20on%20map%20images%20there%20tend%20to%20be%20numerous%20administrative%20regions%20overlaying%20map%20annotations%20and%20symbols%2C%20thus%20making%20it%20difficult%20to%20accurately%20detect%20each%20region.%20Using%20a%20RetinaNet-based%20target%20detection%20model%20integrating%20ResNet50%20and%20a%20feature%20pyramid%20network%20%28FPN%29%2C%20this%20study%20built%20a%20multi-target%20model%20and%20a%20single-target%20cascading%20model%20from%20three%20single-target%20models%20by%20taking%20Taiwan%2C%20Tibet%2C%20and%20the%20Chinese%20mainland%20as%20target%20examples.%20Two%20models%20were%20evaluated%20both%20in%20classification%20and%20localization%20accuracy%20to%20investigate%20their%20administrative%20region%20detection%20performance.%20The%20results%20show%20that%20the%20single-target%20cascading%20model%20was%20able%20to%20detect%20more%20administrative%20regions%2C%20with%20a%20higher%20f1_score%20of%200.86%20and%20mAP%20of%200.85%20compared%20to%20the%20multi-target%20model%20%280.56%20and%200.52%2C%20respectively%29.%20Furthermore%2C%20location%20
box%20size%20distribution%20from%20the%20single-target%20cascading%20model%20looks%20more%20similar%20to%20that%20of%20manually%20annotated%20box%20sizes%2C%20which%20signifies%20that%20the%20proposed%20cascading%20model%20is%20superior%20to%20the%20multi-target%20model.%20This%20study%20is%20promising%20in%20providing%20support%20for%20computer%20map%20reading%20and%20intelligent%20map%20censorship.%22%2C%22date%22%3A%222022%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fs22197594%22%2C%22ISSN%22%3A%221424-8220%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F1424-8220%5C%2F22%5C%2F19%5C%2F7594%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A50%3A25Z%22%7D%7D%2C%7B%22key%22%3A%22SRS2PMJT%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Soliman%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESoliman%2C%20A.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9780077%27%3EWeakly%20Supervised%20Segmentation%20of%20Buildings%20in%20Digital%20Elevation%20Models%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Weakly%20Supervised%20Segmentation%20of%20Buildings%20in%20Digital%20Elevation%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aiman%22%2C%22lastName%22%3A%22Soliman%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifan%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shirui%22%2C%22lastName%22%3A%22Luo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rauf%22%2C%22lastName%22%3A%22Makharov%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Volodymyr%22%2C%22lastName%22%3A%22Kindratenko%22%7D%5D%2C%22abstractNote%22%3A%22The%20lack%20of%20quality%20label%20data%20is%20considered%20one%20of%20the%20main%20bottlenecks%20for%20training%20machine%20and%20deep%20learning%20%28DL%29%20models.%20Weakly%20supervised%20learning%20using%20incomplete%2C%20coarse%2C%20or%20inaccurate%20data%20is%20an%20alternative%20strategy%20to%20overcome%20the%20scarcity%20of%20training%20data.%20We%20trained%20a%20U-Net%20model%20for%20segmenting%20buildings%5Cu2019%20footprints%20from%20a%20high-resolution%20digital%20elevation%20model%20%28DEM%29%2C%20using%20the%20existing%20label%20data%20from%20the%20open-access%20Microsoft%20building%20footprints%20%28MS-BF%29%20dataset.%20Comparison%20using%20an%20independent%2C%20manually%20labeled%20benchmark%20indicated%20the%20success%20of%20weak%20supervision%20learning%20as%20the%20quality%20of%20model%20prediction%20%5Bintersection%20over%20union%20%28IoU%29%3A%200.876%5D%20surpassed%20that%20of%20the%20original%20Microsoft%20data%20quality%20%28IoU%3A%200.672%29%20by%20approximately%2020%25.%20Moreover%2C%20adding%20extra%20channels%20such%20as%20elevation%20derivatives%2C%20slope%2C%20aspect%2C%20and%20profile%20curvatures%20did%20not%20enhance%20the%20weak%20learning%20process%20as%20the%20model%20learned%20directly%20from%20the%20original%20elevation%20data.%20Our%20results%20demonstrate%20the%20value%20of%20using%20existing%20da
ta%20for%20training%20DL%20models%20even%20if%20they%20are%20noisy%20and%20incomplete.%22%2C%22date%22%3A%222022%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLGRS.2022.3177160%22%2C%22ISSN%22%3A%221558-0571%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9780077%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A15%3A25Z%22%7D%7D%2C%7B%22key%22%3A%22DPNZVCWE%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xie%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EXie%2C%20X.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2076-3417%5C%2F12%5C%2F19%5C%2F9900%27%3EBuilding%20Function%20Recognition%20Using%20the%20Semi-Supervised%20Classification%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Building%20Function%20Recognition%20Using%20the%20Semi-Supervised%20Classification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuejing%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yawen%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yongyang%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhanjun%22%2C%22lastName%22%3A%22He%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xueye%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaoyun%22%2C%22lastName%22%3A%22Zheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhong%22%2C%22lastName%22%3A%22Xie%22%7D%5D%2C%22abstractNote%22%3A%22The%20functional%20classification%20of%20buildings%20is%20important%20for%20creating%20and%20managing%20urban%20zones%20and%20assisting%20government%20departments.%20Building%20function%20recognition%20is%20incredibly%20valuable%20for%20wide%20applications%20ranging%20from%20the%20determination%20of%20energy%20demand.%20By%20aiming%20at%20the%20topic%20of%20urban%20function%20classification%2C%20a%20semi-supervised%20graph%20structure%20network%20combined%20unified%20message%20passing%20model%20was%20introduced.%20The%20data%20of%20this%20model%20include%20spatial%20location%20distribution%20of%20buildings%2C%20building%20characteristics%20and%20the%20information%20mined%20from%20points%20of%20interesting%20%28POIs%29.%20In%20order%20to%20extract%20the%20context%20information%2C%20each%20building%20was%20regarded%20as%20a%20graph%20node.%20Building%20characteristics%20and%20corresponding%20POIs%20information%20were%20embedded%20to%20mine%20the%20building%20function%20by%20the%20graph%20convolutional%20neural%20network.%20When%20training%20the%20model%2C%20several%20node%20labels%20in%20the%20graph%20were%20masked%2C%20and%20then%20these%20labels%20were%20predicted%20by%20the%20trained%20model%20so%20that%20this%20work%20could%20take%20full%20advantage%20of%20the%20node%20label%20and%20the%20feature%20information%20of%20all%20nodes%20in%20both%20the%20training%20and%20prediction%20stages.%20Quasi-experiments%20proved%20that%20the%20proposed%20method%20for%20building%20function%20cl
assification%20using%20multi-source%20data%20enables%20the%20model%20to%20capture%20more%20meaningful%20information%20with%20limited%20labels%2C%20and%20it%20achieves%20better%20function%20classification%20results.%22%2C%22date%22%3A%222022%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fapp12199900%22%2C%22ISSN%22%3A%222076-3417%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2076-3417%5C%2F12%5C%2F19%5C%2F9900%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A04%3A45Z%22%7D%7D%2C%7B%22key%22%3A%228TEGB7UN%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Schn%5Cu00fcrer%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESchn%5Cu00fcrer%2C%20R.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2021.1949087%27%3EInstance%20Segmentation%2C%20Body%20Part%20Parsing%2C%20and%20Pose%20Estimation%20of%20Human%20Figures%20in%20Pictorial%20Maps%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Instance%20Segmentation%2C%20Body%20Part%20Parsing%2C%20and%20Pose%20Estimation%20of%20Human%20Figures%20in%20Pictorial%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Raimund%22%2C%22lastName%22%3A%22Schn%5Cu00fcrer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%20Cengiz%22%2C%22lastName%22%3A%22%5Cu00d6ztireli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ren%5Cu00e9%22%2C%22lastName%22%3A%22Sieber%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20years%2C%20convolutional%20neural%20networks%20%28CNNs%29%20have%20been%20applied%20successfully%20to%20recognise%20persons%2C%20their%20body%20parts%20and%20pose%20keypoints%20in%20photos%20and%20videos.%20The%20transfer%20of%20these%20techniques%20to%20artificially%20created%20images%20is%20rather%20unexplored%2C%20though%20challenging%20since%20these%20images%20are%20drawn%20in%20different%20styles%2C%20body%20proportions%2C%20and%20levels%20of%20abstraction.%20In%20this%20work%2C%20we%20study%20these%20problems%20on%20the%20basis%20of%20pictorial%20maps%20where%20we%20identify%20included%20human%20figures%20with%20two%20consecutive%20CNNs%3A%20We%20first%20segment%20individual%20figures%20with%20Mask%20R-CNN%2C%20and%20then%20parse%20their%20body%20parts%20and%20estimate%20their%20poses%20simultaneously%20with%20four%20different%20UNet%2B%2B%20versions.%20We%20train%20the%20CNNs%20with%20a%20mixture%20of%20real%20persons%20and%20synthetic%20figures%20and%20compare%20the%20results%20with%20manually%20annotated%20test%20datasets%20consisting%20of%20pictorial%20figures.%20By%20varying%20the%20training%20datasets%20and%20the%20CNN%20configurations%2C%20we%20were%20able%20to%20improve%20the%20original%20Mask%20R-CNN%20model%20and%20we%20achieved%20moderately%20satisfying%20results%20with%20the%20UNet%2B%2B%20versions.%20The%20extracted%20figures%20may%20be%20used%20for%20animation%20and%20storyte
lling%20and%20may%20be%20relevant%20for%20the%20analysis%20of%20historic%20and%20contemporary%20maps.%22%2C%22date%22%3A%222022%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F23729333.2021.1949087%22%2C%22ISSN%22%3A%222372-9333%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2021.1949087%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A21%3A31Z%22%7D%7D%2C%7B%22key%22%3A%22LDDW4JVA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222021-12%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EWu%2C%20J.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F12%5C%2F831%27%3EAn%20Automatic%20Extraction%20Method%20for%20Hatched%20Residential%20Areas%20in%20Raster%20Maps%20Based%20on%20Multi-Scale%20Feature%20Fusion%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22An%20Automatic%20Extraction%20Method%20for%20Hatched%20Residential%20Areas%20in%20Raster%20Maps%20Based%20on%20Multi-Scale%20Feature%20Fusion%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianhua%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiaqi%22%2C%22lastName%22%3A%22Xiong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yu%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Hu%22%7D%5D%2C%22abstractNote%22%3A%22Extracting%20the%20residential%20areas%20from%20digital%20raster%20maps%20is%20beneficial%20for%20research%20on%20land%20use%20change%20analysis%20and%20land%20quality%20assessment.%20In%20traditional%20methods%20for%20extracting%20residential%20areas%20in%20raster%20maps%2C%20parameters%20must%20be%20set%20manually%3B%20these%20methods%20also%20suffer%20from%20low%20extraction%20accuracy%20and%20inefficiency.%20Therefore%2C%20we%20have%20proposed%20an%20automatic%20method%20for%20extracting%20the%20hatched%20residential%20areas%20from%20raster%20maps%20based%20on%20a%20multi-scale%20U-Net%20and%20fully%20connected%20conditional%20random%20fields.%20The%20experimental%20results%20showed%20that%20the%20model%20that%20was%20based%20on%20a%20multi-scale%20U-Net%20with%20fully%20connected%20conditional%20random%20fields%20achieved%20scores%20of%2097.05%25%20in%20Dice%2C%2094.26%25%20in%20Intersection%20over%20Union%2C%2094.92%25%20in%20recall%2C%2093.52%25%20in%20precision%20and%2099.52%25%20in%20accuracy.%20Compared%20to%20the%20FCN-8s%2C%20the%20five%20metrics%20increased%20by%201.47%25%2C%202.72%25%2C%201.07%25%2C%204.56%25%20and%200.26%25%2C%20respectively%20and%20compared%20to%20the%20U-Net%2C%20they%20increased%20by%200.84%25%2C%201.56%25%2C%203.00%25%2C%200.65%25%20and%200.13%25%2C%20respectively.%20Our%20method%20also%20outperformed%20the%20Gabor%20filter-based%20algorithm%20in%20the%20number%20of%20identified%20objects%20and%20the%20accuracy%20of%20object%20contour%20locations.%20Furthermore%2C%20we%20were%20able%20to%20extract%20all%20of%20the%20hatched%20residential%20areas%20from%20a%20sheet%20of%20raster%20map.%20These%20results%20demonstrate%20t
hat%20our%20method%20has%20high%20accuracy%20in%20object%20recognition%20and%20contour%20position%2C%20thereby%20providing%20a%20new%20method%20with%20strong%20potential%20for%20the%20extraction%20of%20hatched%20residential%20areas.%22%2C%22date%22%3A%222021%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi10120831%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F12%5C%2F831%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A46%3A14Z%22%7D%7D%2C%7B%22key%22%3A%22XJM44F3C%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Schn%5Cu00fcrer%20et%20al.%22%2C%22parsedDate%22%3A%222021-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ESchn%5Cu00fcrer%2C%20R.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F00087041.2020.1738112%27%3EDetection%20of%20Pictorial%20Map%20Objects%20with%20Convolutional%20Neural%20Networks%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Detection%20of%20Pictorial%20Map%20Objects%20with%20Convolutional%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Raimund%22%2C%22lastName%22%3A%22Schn%5Cu00fcrer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ren%5Cu00e9%22%2C%22lastName%22%3A%22Sieber%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jost%22%2C%22lastName%22%3A%22Schmid-Lanter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%20Cengiz%22%2C%22lastName%22%3A%22%5Cu00d6ztireli%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20work%2C%20realistically%20drawn%20objects%20are%20identified%20on%20digital%20maps%20by%20convolutional%20neural%20networks.%20For%20the%20first%20two%20experiments%2C%206200%20images%20were%20retrieved%20from%20Pinterest.%20While%20alternating%20image%20input%20options%2C%20two%20binary%20classifiers%20based%20on%20Xception%20and%20InceptionResNetV2%20were%20trained%20to%20separate%20maps%20and%20pictorial%20maps.%20Results%20showed%20that%20the%20accuracy%20is%2095%5Cu201397%25%20to%20distinguish%20maps%20from%20other%20images%2C%20whereas%20maps%20with%20pictorial%20objects%20are%20correctly%20classified%20at%20rates%20of%2087%5Cu201392%25.%20For%20a%20third%20experiment%2C%20bounding%20boxes%20of%203200%20sailing%20ships%20were%20annotated%20in%20historic%20maps%20from%20different%20digital%20libraries.%20Faster%20R-CNN%20and%20RetinaNet%20were%20compared%20to%20determine%20the%20box%20coordinates%2C%20while%20adjusting%20anchor%20scales%20and%20examining%20configurations%20for%20small%20objects.%20A%20resulting%20average%20precision%20of%2032%25%20was%20obtained%20for%20Faster%20R-CNN%20and%20of%2036%25%20for%20RetinaNet.%20Research%20outcomes%20are%20relevant%20for%20trawling%20map%20images%20on%20the%20Internet%20and%20for%20enhancing%20the%20advanced%20search%20of%20digital%20map%20catalogues.%22%2C%22date%22%3A%222021-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F00087041.2020.1738112%
22%2C%22ISSN%22%3A%220008-7041%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F00087041.2020.1738112%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A38%3A03Z%22%7D%7D%2C%7B%22key%22%3A%223MEHB5PS%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EChen%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-86337-1_34%27%3EVectorization%20of%20Historical%20Maps%20Using%20Deep%20Edge%20Filtering%20and%20Closed%20Shape%20Extraction%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Vectorization%20of%20Historical%20Maps%20Using%20Deep%20Edge%20Filtering%20and%20Closed%20Shape%20Extraction%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yizi%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Edwin%22%2C%22lastName%22%3A%22Carlinet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Joseph%22%2C%22lastName%22%3A%22Chazalon%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cl%5Cu00e9ment%22%2C%22lastName%22%3A%22Mallet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bertrand%22%2C%22lastName%22%3A%22Dum%5Cu00e9nieu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julien%22%2C%22lastName%22%3A%22Perret%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Josep%22%2C%22lastName%22%3A%22Llad%5Cu00f3s%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Daniel%22%2C%22lastName%22%3A%22Lopresti%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Seiichi%22%2C%22lastName%22%3A%22Uchida%22%7D%5D%2C%22abstractNote%22%3A%22Maps%20have%20been%20a%20unique%20source%20of%20knowledge%20for%20centuries.%20Such%20historical%20documents%20provide%20invaluable%20information%20for%20analyzing%20the%20complex%20spatial%20transformation%20of%20landscapes%20over%20important%20time%20frames.%20This%20is%20particularly%20true%20for%20urban%20areas%20that%20encompass%20multiple%20interleaved%20research%20domains%20%28social%20sciences%2C%20economy%2C%20etc.%29.%20The%20large%20amount%20and%20significant%20diversity%20of%20map%20sources%20call%20for%20automatic%20image%20processing%20techniques%20in%20order%20to%20extract%20the%20relevant%20objects%20under%20a%20vectorial%20shape.%20The%20complexity%20of%20maps%20%28text%2C%20noise%2C%20digitization%20artifacts%2C%20etc.%29%20has%20hindered%20the%20capacity%20of%20proposing%20a%20versatile%20and%20efficient%20raster-to-vector%20approaches%20for%20decades.%20We%20propose%20a%20learnable%2C%20reproducible%2C%20and%20reusable%20solution%20for%20the%20automatic%20transformation%20of%20raster%20maps%20into%20vector%20objects%20%28building%20blocks%2C%20streets%2C%20rivers%29.%20It%20is%20built%20upon%20the%20complementary%20strength%20of%20mathematical%20morphology%20and%20convolutional%20neural%20networks%20through%20efficient%20edge%20filtering.%20Evenmore%2C%20we%20modify%20ConnNet%20and%20combine%20
with%20deep%20edge%20filtering%20architecture%20to%20make%20use%20of%20pixel%20connectivity%20information%20and%20built%20an%20end-to-end%20system%20without%20requiring%20any%20post-processing%20techniques.%20In%20this%20paper%2C%20we%20focus%20on%20the%20comprehensive%20benchmark%20on%20various%20architectures%20on%20multiple%20datasets%20coupled%20with%20a%20novel%20vectorization%20step.%20Our%20experimental%20results%20on%20a%20new%20public%20dataset%20using%20COCO%20Panoptic%20metric%20exhibit%20very%20encouraging%20results%20confirmed%20by%20a%20qualitative%20analysis%20of%20the%20success%20and%20failure%20cases%20of%20our%20approach.%20Code%2C%20dataset%2C%20results%20and%20extra%20illustrations%20are%20freely%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fsoduco%5C%2FICDAR-2021-Vectorization.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22Document%20Analysis%20and%20Recognition%20%5Cu2013%20ICDAR%202021%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-030-86337-1_34%22%2C%22ISBN%22%3A%22978-3-030-86337-1%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-86337-1_34%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A16%3A25Z%22%7D%7D%2C%7B%22key%22%3A%22EC33IKWA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EChen%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-76657-3_5%27%3ECombining%20Deep%20Learning%20and%20Mathematical%20Morphology%20for%20Historical%20Map%20Segmentation%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Combining%20Deep%20Learning%20and%20Mathematical%20Morphology%20for%20Historical%20Map%20Segmentation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yizi%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Edwin%22%2C%22lastName%22%3A%22Carlinet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Joseph%22%2C%22lastName%22%3A%22Chazalon%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cl%5Cu00e9ment%22%2C%22lastName%22%3A%22Mallet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bertrand%22%2C%22lastName%22%3A%22Dum%5Cu00e9nieu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julien%22%2C%22lastName%22%3A%22Perret%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Joakim%22%2C%22lastName%22%3A%22Lindblad%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Filip%22%2C%22lastName%22%3A%22Malmberg%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Nata%5Cu0161a%22%2C%22lastName%22%3A%22Sladoje%22%7D%5D%2C%22abstractNote%22%3A%22The%20digitization%20of%20historical%20maps%20enables%20the%20study%20of%20ancient%2C%20fragile%2C%20unique%2C%20and%20hardly%20accessible%20information%20sources.%20Main%20map%20features%20can%20be%20retrieved%20and%20tracked%20through%20the%20time%20for%20subsequent%20thematic%20analysis.
%20The%20goal%20of%20this%20work%20is%20the%20vectorization%20step%2C%20i.e.%2C%20the%20extraction%20of%20vector%20shapes%20of%20the%20objects%20of%20interest%20from%20raster%20images%20of%20maps.%20We%20are%20particularly%20interested%20in%20closed%20shape%20detection%20such%20as%20buildings%2C%20building%20blocks%2C%20gardens%2C%20rivers%2C%20etc.%20in%20order%20to%20monitor%20their%20temporal%20evolution.%20Historical%20map%20images%20present%20significant%20pattern%20recognition%20challenges.%20The%20extraction%20of%20closed%20shapes%20by%20using%20traditional%20Mathematical%20Morphology%20%28MM%29%20is%20highly%20challenging%20due%20to%20the%20overlapping%20of%20multiple%20map%20features%20and%20texts.%20Moreover%2C%20state-of-the-art%20Convolutional%20Neural%20Networks%20%28CNN%29%20are%20perfectly%20designed%20for%20content%20image%20filtering%20but%20provide%20no%20guarantee%20about%20closed%20shape%20detection.%20Also%2C%20the%20lack%20of%20textural%20and%20color%20information%20of%20historical%20maps%20makes%20it%20hard%20for%20CNN%20to%20detect%20shapes%20that%20are%20represented%20by%20only%20their%20boundaries.%20Our%20contribution%20is%20a%20pipeline%20that%20combines%20the%20strengths%20of%20CNN%20%28efficient%20edge%20detection%20and%20filtering%29%20and%20MM%20%28guaranteed%20extraction%20of%20closed%20shapes%29%20in%20order%20to%20achieve%20such%20a%20task.%20The%20evaluation%20of%20our%20approach%20on%20a%20public%20dataset%20shows%20its%20effectiveness%20for%20extracting%20the%20closed%20boundaries%20of%20objects%20in%20historical%20maps.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22Discrete%20Geometry%20and%20Mathematical%20Morphology%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-030-76657-3_5%22%2C%22ISBN%22%3A%22978-3-030-76657-3%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-76657-3_5%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A15%3A52Z%22%7D%7D%2C%7B%22key%22%3A%22EVF393MF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Petitpierre%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EPetitpierre%2C%20R.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%27%3EGeneric%20Semantic%20Segmentation%20of%20Historical%20Maps%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Generic%20Semantic%20Segmentation%20of%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R%5Cu00e9mi%22%2C%22lastName%22%3A%22Petitpierre%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fr%5Cu00e9d%5Cu00e9ric%22%2C%22lastName%22%3A%22Kaplan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Isabella%22%2C%22lastName%22%3A%22di%20Lenardo%22%7D%5D%2C%22abstractNote%22%3A%22Research%20in%20automatic%20map%20processing%20is%20largely%20focused%20on%20homogeneous%20corpora%20or%20even%20individual%20maps%2C%20leading%20to%20inflexible%20models.%20Based%20on%20two%20new%20corpora%2C%20the%20first%20one%20centered%20on%20maps%20of%20Paris%20and%20the%2
0second%20one%20gathering%20maps%20of%20cities%20from%20all%20over%20the%20world%2C%20we%20present%20a%20method%20for%20computing%20the%20figurative%20diversity%20of%20cartographic%20collections.%20In%20a%20second%20step%2C%20we%20discuss%20the%20actual%20opportunities%20for%20CNN-based%20semantic%20segmentation%20of%20historical%20city%20maps.%20Through%20several%20experiments%2C%20we%20analyze%20the%20impact%20of%20figurative%20and%20cultural%20diversity%20on%20the%20segmentation%20performance.%20Finally%2C%20we%20highlight%20the%20potential%20for%20large-scale%20and%20generic%20algorithms.%20Training%20data%20and%20code%20of%20the%20described%20algorithms%20are%20made%20open-source%20and%20published%20with%20this%20article.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22CEUR%20Workshop%20Proceedings%22%2C%22conferenceName%22%3A%22CHR%202021%3A%20Computational%20Humanities%20Research%20Conference%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A34%3A57Z%22%7D%7D%2C%7B%22key%22%3A%22PC477AVJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Garcia-Molsosa%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EGarcia-Molsosa%2C%20A.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1002%5C%2Farp.1807%27%3EPotential%20of%20deep%20learning%20segmentation%20for%20the%20extraction%20of%20archaeological%20features%20from%20historical%20map%20series%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Potential%20of%20deep%20learning%20segmentation%20for%20the%20extraction%20of%20archaeological%20features%20from%20historical%20map%20series%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arnau%22%2C%22lastName%22%3A%22Garcia-Molsosa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hector%20A.%22%2C%22lastName%22%3A%22Orengo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dan%22%2C%22lastName%22%3A%22Lawrence%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Graham%22%2C%22lastName%22%3A%22Philip%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kristen%22%2C%22lastName%22%3A%22Hopper%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cameron%20A.%22%2C%22lastName%22%3A%22Petrie%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20present%20a%20unique%20depiction%20of%20past%20landscapes%2C%20providing%20evidence%20for%20a%20wide%20range%20of%20information%20such%20as%20settlement%20distribution%2C%20past%20land%20use%2C%20natural%20resources%2C%20transport%20networks%2C%20toponymy%20and%20other%20natural%20and%20cultural%20data%20within%20an%20explicitly%20spatial%20context.%20Maps%20produced%20before%20the%20expansion%20of%20large-scale%20mechanized%20agriculture%20reflect%20a%20landscape%20that%20is%20lost%20today.%20Of%20particular%20interest%20to%20us%20is%20the%20great%20quantity%20of%20archaeologically%20relevant%20information%20that%20these%20maps%20reco
rded%2C%20both%20deliberately%20and%20incidentally.%20Despite%20the%20importance%20of%20the%20information%20they%20contain%2C%20researchers%20have%20only%20recently%20begun%20to%20automatically%20digitize%20and%20extract%20data%20from%20such%20maps%20as%20coherent%20information%2C%20rather%20than%20manually%20examine%20a%20raster%20image.%20However%2C%20these%20new%20approaches%20have%20focused%20on%20specific%20types%20of%20information%20that%20cannot%20be%20used%20directly%20for%20archaeological%20or%20heritage%20purposes.%20This%20paper%20provides%20a%20proof%20of%20concept%20of%20the%20application%20of%20deep%20learning%20techniques%20to%20extract%20archaeological%20information%20from%20historical%20maps%20in%20an%20automated%20manner.%20Early%20twentieth%20century%20colonial%20map%20series%20have%20been%20chosen%2C%20as%20they%20provide%20enough%20time%20depth%20to%20avoid%20many%20recent%20large-scale%20landscape%20modifications%20and%20cover%20very%20large%20areas%20%28comprising%20several%20countries%29.%20The%20use%20of%20common%20symbology%20and%20conventions%20enhance%20the%20applicability%20of%20the%20method.%20The%20results%20show%20deep%20learning%20to%20be%20an%20efficient%20tool%20for%20the%20recovery%20of%20georeferenced%2C%20archaeologically%20relevant%20information%20that%20is%20represented%20as%20conventional%20signs%2C%20line-drawings%20and%20text%20in%20historical%20maps.%20The%20method%20can%20provide%20excellent%20results%20when%20an%20adequate%20training%20dataset%20has%20been%20gathered%20and%20is%20therefore%20at%20its%20best%20when%20applied%20to%20the%20large%20map%20series%20that%20can%20supply%20such%20information.%20The%20deep%20learning%20approaches%20described%20here%20open%20up%20the%20possibility%20to%20map%20sites%20and%20features%20across%20entire%20map%20series%20much%20more%20quickly%20and%20coherently%20than%20other%20available%20methods%2C%20opening%20up%20the%20potential%20to%20reconstruct%20archaeological%20landscapes%20at%20continental%20scales.%22%2C%22date%22%3A%222021%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1002%5C%2Farp.1807%22%2C%22ISSN%22%3A%221099-0763%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1002%5C%2Farp.1807%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A57%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22KIH8G26J%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Maxwell%20et%20al.%22%2C%22parsedDate%22%3A%222020-01%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EMaxwell%2C%20A.E.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F12%5C%2F24%5C%2F4145%27%3ESemantic%20Segmentation%20Deep%20Learning%20for%20Extracting%20Surface%20Mine%20Extents%20from%20Historic%20Topographic%20Maps%3C%5C%2Fa%3E.%202020%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Semantic%20Segmentation%20Deep%20Learning%20for%20Extracting%20Surface%20Mine%20Extents%20from%20Historic%20Topographic%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aaron%20E.%22%2C%22lastName%22%3A%22Maxwell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michelle%20S.%22%2C%22lastName%2
2%3A%22Bester%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luis%20A.%22%2C%22lastName%22%3A%22Guillen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christopher%20A.%22%2C%22lastName%22%3A%22Ramezan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dennis%20J.%22%2C%22lastName%22%3A%22Carpinello%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yiting%22%2C%22lastName%22%3A%22Fan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Faith%20M.%22%2C%22lastName%22%3A%22Hartley%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shannon%20M.%22%2C%22lastName%22%3A%22Maynard%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jaimee%20L.%22%2C%22lastName%22%3A%22Pyron%22%7D%5D%2C%22abstractNote%22%3A%22Historic%20topographic%20maps%2C%20which%20are%20georeferenced%20and%20made%20publicly%20available%20by%20the%20United%20States%20Geological%20Survey%20%28USGS%29%20and%20the%20National%20Map%5Cu2019s%20Historical%20Topographic%20Map%20Collection%20%28HTMC%29%2C%20are%20a%20valuable%20source%20of%20historic%20land%20cover%20and%20land%20use%20%28LCLU%29%20information%20that%20could%20be%20used%20to%20expand%20the%20historic%20record%20when%20combined%20with%20data%20from%20moderate%20spatial%20resolution%20Earth%20observation%20missions.%20This%20is%20especially%20true%20for%20landscape%20disturbances%20that%20have%20a%20long%20and%20complex%20historic%20record%2C%20such%20as%20surface%20coal%20mining%20in%20the%20Appalachian%20region%20of%20the%20eastern%20United%20States.%20In%20this%20study%2C%20we%20investigate%20this%20specific%20mapping%20problem%20using%20modified%20UNet%20semantic%20segmentation%20deep%20learning%20%28DL%29%2C%20which%20is%20based%20on%20convolutional%20neural%20networks%20%28CNNs%29%2C%20and%20a%20large%20example%20dataset%20of%20historic%20surface%20mine%20disturbance%20extents%20from%20the%20USGS%20Geology%2C%20Geophysics%2C%20and%20Geochemistry%20Science%20Center%20%28GGGSC%29.%20The%20primary%20objectives%20of%20this%20study%20are%20to%20%281%29%20evaluate%20model%20generalization%20to%20new%20geographic%20extents%20and%20topographic%20maps%20and%20%282%29%20to%20assess%20the%20impact%20of%20training%20sample%20size%2C%20or%20the%20number%20of%20manually%20interpreted%20topographic%20maps%2C%20on%20model%20performance.%20Using%20data%20from%20the%20state%20of%20Kentucky%2C%20our%20findings%20suggest%20that%20DL%20semantic%20segmentation%20can%20detect%20surface%20mine%20disturbance%20features%20from%20topographic%20maps%20with%20a%20high%20level%20of%20accuracy%20%28Dice%20coefficient%20%3D%200.902%29%20and%20relatively%20balanced%20omission%20and%20commission%20error%20rates%20%28Precision%20%3D%200.891%2C%20Recall%20%3D%200.917%29.%20When%20the%20model%20is%20applied%20to%20new%20topographic%20maps%20in%20Ohio%20and%20Virginia%20to%20assess%20generalization%2C%20model%20performance%20decreases%3B%20however%2C%20performance%20is%20still%20strong%20%28Ohio%20Dice%20coefficient%20%3D%200.837%20and%20Virginia%20Dice%20coefficient%20%3D%200.763%29.%20Further%2C%20when%20reducing%20the%20number%20of%20topographic%20maps%20used%20to%20derive%20training%20image%20chips%20from%2084%20to%2015%2C%20model%20performance%20was%20only%20slightly%20reduced%2C%20suggesting%20that%20models%20that%20generalize%20well%20to%20new%20data%20and%20geographic%20extents%20may%20not%20require%20a%20large%20training%20set.%20We%20suggest%20the%20incorporation%20of%20DL%20sem
Mai, G. et al. Towards general-purpose representation learning of polygonal geometries. 2023
Xydas, C. et al. Buildings Extraction from Historical Topographic Maps via a Deep Convolution Neural Network. 2022
Farmakis-Serebryakova, M. et al. Terrain Segmentation Using a U-Net for Improved Relief Shading. 2022
Soliman, A. et al. Weakly Supervised Segmentation of Buildings in Digital Elevation Models. 2022
Xie, X. et al. Building Function Recognition Using the Semi-Supervised Classification. 2022
Schnürer, R. et al. Instance Segmentation, Body Part Parsing, and Pose Estimation of Human Figures in Pictorial Maps. 2022
Schnürer, R. et al. Detection of Pictorial Map Objects with Convolutional Neural Networks. 2021
Chen, Y. et al. Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction. 2021
Chen, Y. et al. Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation. 2021
Petitpierre, R. et al. Generic Semantic Segmentation of Historical Maps. 2021
Garcia-Molsosa, A. et al. Potential of deep learning segmentation for the extraction of archaeological features from historical map series. 2021
Maxwell, A.E. et al. Semantic Segmentation Deep Learning for Extracting Surface Mine Extents from Historic Topographic Maps. 2020
Heitzler, M. et al. Cartographic reconstruction of building footprints from historical maps: A study on the Swiss Siegfried map. 2020
Feature Extraction (Labels)
Arundel, S. et al. Deep learning detection and recognition of spot elevations on historic topographic maps. 2022
Can, Y.S. et al. Text Detection and Recognition by using CNNs in the Austro-Hungarian Historical Military Mapping Survey. 2021
Garcia-Molsosa, A. et al. Potential of deep learning segmentation for the extraction of archaeological features from historical map series. 2021
Weinman, J. et al. Deep Neural Networks for Text Detection and Recognition in Historical Maps. 2019
Feature Extraction (Fuzzy Elements)
Wu, S. et al. A Closer Look at Segmentation Uncertainty of Scanned Historical Maps. 2022
Ståhl, N. et al. Identifying wetland areas in historical maps using deep convolutional neural networks. 2022
Uhl, J.H. et al. Automated Extraction of Human Settlement Patterns From Historical Topographic Map Series Using Weakly Supervised Convolutional Neural Networks. 2020
Uhl, J.H. et al. Spatialising uncertainty in image segmentation using weakly supervised convolutional neural networks: a case study from historical map processing. 2018
Uhl, J.H. et al. Extracting human settlement footprint from historical topographic map series using context-based machine learning. 2017
Pattern Detection (Lines)
Li, P. et al. MultiLineStringNet: a deep neural network for linear feature set recognition. 2024
Xu, Y. et al. Application of a graph convolutional network with visual and semantic features to classify urban scenes. 2022
Yang, M. et al. Pattern Recognition and Segmentation of Administrative Boundaries Using a One-Dimensional Convolutional Neural Network and Grid Shape Context Descriptor. 2022
Yang, M. et al. Detecting interchanges in road networks using a graph convolutional network approach. 2022
Djouvas, C. et al. Automating road junction identification using Crowdsourcing and Machine Learning on GPS transformed data. 2021
Kuo, C.-L. et al. Road Characteristics Detection Based on Joint Convolutional Neural Networks with Adaptive Squares. 2021
Touya, G. et al. Deep Learning for Enrichment of Vector Spatial Databases: Application to Highway Interchange. 2020
Li, C. et al. A complex junction recognition method based on GoogLeNet model. 2020
thor%22%2C%22firstName%22%3A%22Honggang%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pengda%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yong%22%2C%22lastName%22%3A%22Yin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sichao%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Complex%20junctions%20are%20typical%20microstructures%20in%20large-scale%20road%20networks%20with%20intricate%20structures%20and%20varied%20morphologies.%20It%20is%20a%20challenge%20to%20identify%20junctions%20in%20map%20generalization%20and%20car%20navigation%20tasks%20accurately.%20Generally%2C%20traditional%20recognition%20methods%20rely%20on%20low-level%20characteristics%20of%20manual%20design%2C%20such%20as%20parallelism%20and%20symmetry.%20In%20recent%20years%2C%20preliminary%20studies%20using%20deep%20learning-based%20recognition%20methods%20were%20conducted.%20However%2C%20only%20a%20few%20junction%20types%20can%20be%20recognized%20by%20existing%20methods%2C%20and%20these%20methods%20cannot%20effectively%20identify%20junctions%20with%20irregular%20shapes%20and%20numerous%20interference%20sections.%20Hence%2C%20this%20article%20proposes%20a%20complex%20junction%20recognition%20method%20based%20on%20the%20GoogLeNet%20model.%20First%2C%20the%20Delaunay%20triangulation%20clustering%20algorithm%20was%20used%20to%20automatically%20identify%20the%20center%20point%20and%20spatial%20range%20of%20training%20samples%20for%20complex%20junctions.%20Second%2C%20vector%20training%20samples%20were%20selected%20from%20OpenStreetMap%20%28OSM%29%20data%20of%2039%20cities%20across%20China%2C%20and%20the%20samples%20were%20then%20augmented%20through%20simplification%2C%20rotation%2C%20and%20mirroring.%20Finally%2C%20the%20vector%20sample%20data%20were%20transformed%20into%20raster%20images%2C%20and%20the%20GoogLeNet%20model%20was%20trained%20to%20learn%20the%20high-level%20fuzzy%20characteristics.%20Experiments%20based%20on%20OSM%20data%20from%20Tianjin%20city%2C%20China%2C%20revealed%20that%20compared%20with%20state-of-the-art%20methods%2C%20the%20proposed%20method%20effectively%20identified%20more%20types%20of%20complex%20junctions%20and%20achieved%20a%20significantly%20higher%20identification%20accuracy.%20Furthermore%2C%20the%20proposed%20method%20has%20strong%20generalizability%20and%20anti-interference%20capability.%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1111%5C%2Ftgis.12681%22%2C%22ISSN%22%3A%221467-9671%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1111%5C%2Ftgis.12681%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A05%3A52Z%22%7D%7D%2C%7B%22key%22%3A%22M9ZB9A9C%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222019-09%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ELi%2C%20H.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F8%5C%2F9%5C%2F421%27%3EAutomatic%20Identification%20of%20Overpass%20Structures%3A%20A%20Method%20of%20Deep%20Learning%3C%5C%2Fa%3E.%202019%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22jo
urnalArticle%22%2C%22title%22%3A%22Automatic%20Identification%20of%20Overpass%20Structures%3A%20A%20Method%20of%20Deep%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hao%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maosheng%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Youxin%22%2C%22lastName%22%3A%22Huang%22%7D%5D%2C%22abstractNote%22%3A%22The%20identification%20of%20overpass%20structures%20in%20road%20networks%20has%20great%20significance%20for%20multi-scale%20modeling%20of%20roads%2C%20congestion%20analysis%2C%20and%20vehicle%20navigation.%20The%20traditional%20vector-based%20methods%20identify%20overpasses%20by%20the%20methodologies%20coming%20from%20computational%20geometry%20and%20graph%20theory%2C%20and%20they%20overly%20rely%20on%20the%20artificially%20designed%20features%20and%20have%20poor%20adaptability%20to%20complex%20scenes.%20This%20paper%20presents%20a%20novel%20method%20of%20identifying%20overpasses%20based%20on%20a%20target%20detection%20model%20%28Faster-RCNN%29.%20This%20method%20utilizes%20raster%20representation%20of%20vector%20data%20and%20convolutional%20neural%20networks%20%28CNNs%29%20to%20learn%20task%20adaptive%20features%20from%20raster%20data%2C%20then%20identifies%20the%20location%20of%20an%20overpass%20by%20a%20Region%20Proposal%20network%20%28RPN%29.%20The%20contribution%20of%20this%20paper%20is%3A%20%281%29%20An%20overpass%20labelling%20geodatabase%20%28OLGDB%29%20for%20the%20OpenStreetMap%20%28OSM%29%20road%20network%20data%20of%20six%20typical%20cities%20in%20China%20is%20established%3B%20%282%29%20Three%20different%20CNNs%20%28ZF-net%2C%20VGG-16%2C%20Inception-ResNet%20V2%29%20are%20integrated%20into%20Faster-RCNN%20and%20evaluated%20by%20accuracy%20performance%3B%20%283%29%20The%20optimal%20combination%20of%20learning%20rate%20and%20batchsize%20is%20determined%20by%20fine-tuning%3B%20and%20%284%29%20Five%20geometric%20metrics%20%28perimeter%2C%20area%2C%20squareness%2C%20circularity%2C%20and%20W%5C%2FL%29%20are%20synthetized%20into%20image%20bands%20to%20enhance%20the%20training%20data%2C%20and%20their%20contribution%20to%20the%20overpass%20identification%20task%20is%20determined.%20The%20experimental%20results%20have%20shown%20that%20the%20proposed%20method%20has%20good%20accuracy%20performance%20%28around%2090%25%29%2C%20and%20could%20be%20improved%20with%20the%20expansion%20of%20OLGDB%20and%20switching%20to%20more%20sophisticated%20target%20detection%20models.%20The%20deep%20learning%20target%20detection%20model%20has%20great%20application%20potential%20in%20large-scale%20road%20network%20pattern%20recognition%2C%20it%20can%20task-adaptively%20learn%20road%20structure%20features%20and%20easily%20extend%20to%20other%20road%20network%20patterns.%22%2C%22date%22%3A%222019%5C%2F9%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi8090421%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F8%5C%2F9%5C%2F421%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A06%3A29Z%22%7D%7D%2C%7B%22key%22%3A%228H4V3EUP%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22He%20and%20Qian%22%2C%22parsedDate%22%3A%222018-03-20%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5
C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EHe%20H.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27http%3A%5C%2F%5C%2Fxb.chinasmp.com%5C%2FCN%5C%2F10.11947%5C%2Fj.AGCS.2018.20170265%27%3EInterchange%20Recognition%20Method%20Based%20on%20CNN%3C%5C%2Fa%3E.%202018%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Interchange%20Recognition%20Method%20Based%20on%20CNN%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haiwei%22%2C%22lastName%22%3A%22He%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haizhong%22%2C%22lastName%22%3A%22Qian%22%7D%5D%2C%22abstractNote%22%3A%22OSM%5Cu6570%5Cu636e%5Cu4e2d%5Cu7acb%5Cu4ea4%5Cu6865%5Cu7ed3%5Cu6784%5Cu7684%5Cu8bc6%5Cu522b%5Cu548c%5Cu5206%5Cu7c7b%5Cuff0c%5Cu80fd%5Cu591f%5Cu4e3a%5Cu6784%5Cu5efa%5Cu591a%5Cu5c3a%5Cu5ea6%5Cu6a21%5Cu578b%5Cu3001%5Cu5bfc...%22%2C%22date%22%3A%222018-03-20%22%2C%22language%22%3A%22zh%22%2C%22DOI%22%3A%2210.11947%5C%2Fj.AGCS.2018.20170265%22%2C%22ISSN%22%3A%221001-1595%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fxb.chinasmp.com%5C%2FCN%5C%2F10.11947%5C%2Fj.AGCS.2018.20170265%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A58%3A47Z%22%7D%7D%5D%7D
Li, P. et al. MultiLineStringNet: a deep neural network for linear feature set recognition. 2024
Yang, M. et al. Detecting interchanges in road networks using a graph convolutional network approach. 2022
Touya, G. et al. Deep Learning for Enrichment of Vector Spatial Databases: Application to Highway Interchange. 2020
Li, C. et al. A complex junction recognition method based on GoogLeNet model. 2020
Li, H. et al. Automatic Identification of Overpass Structures: A Method of Deep Learning. 2019
He H. et al. Interchange Recognition Method Based on CNN. 2018
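A recurring recipe in the entries above (e.g., Touya et al. 2020; Li, H. et al. 2019; He, H. et al. 2018) is to rasterize vector road geometries into small fixed-size images and hand them to an image classifier. The sketch below shows only that vector-to-raster step, assuming shapely LineStrings and an illustrative 128 × 128 tile size; it is not the exact pipeline of any one paper.

```python
# Sketch: rasterize vector road lines into a small image tile suitable for
# CNN classification, in the spirit of the raster-based approaches above.
# Tile size and line width are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import LineString

def rasterize_roads(lines, bounds, size=128):
    """Render shapely LineStrings into a size x size binary array."""
    minx, miny, maxx, maxy = bounds
    fig, ax = plt.subplots(figsize=(1, 1), dpi=size)
    ax.set_position([0, 0, 1, 1])  # let the axes fill the whole figure
    ax.set_xlim(minx, maxx)
    ax.set_ylim(miny, maxy)
    ax.axis("off")
    for line in lines:
        xs, ys = line.xy
        ax.plot(xs, ys, color="black", linewidth=1)
    fig.canvas.draw()
    rgba = np.asarray(fig.canvas.buffer_rgba())  # (size, size, 4)
    plt.close(fig)
    return (rgba[..., :3].mean(axis=-1) < 128).astype(np.float32)

# Toy example: a T-junction rendered as a 128 x 128 tile
tile = rasterize_roads(
    [LineString([(0, 5), (10, 5)]), LineString([(5, 5), (5, 10)])],
    bounds=(0, 0, 10, 10),
)
print(tile.shape)  # (128, 128)
```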
Pattern Detection (Polygons)
Li, P. et al. MultiLineStringNet: a deep neural network for linear feature set recognition. 2024
Xu, Y. et al. Application of a graph convolutional network with visual and semantic features to classify urban scenes. 2022
Yan, X. et al. A graph deep learning approach for urban building grouping. 2022
Ma, L. et al. A New Graph-Based Fractality Index to Characterize Complexity of Urban Form. 2022
Hu, Y. et al. Few-Shot Building Footprint Shape Classification with Relation Network. 2022
Li, Y. et al. A Skeleton-Line-Based Graph Convolutional Neural Network for Areal Settlements' Shape Classification. 2022
Liu, C. et al. TriangleConv: A Deep Point Convolutional Network for Recognizing Building Shapes in Map Space. 2021
Yan, X. et al. Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps. 2021
Zhao, R. et al. Recognition of building group patterns using graph convolutional network. 2020
Yan, X. et al. A graph convolutional neural network for classification of building patterns using spatial vector data. 2019
Lee, J. et al. Machine Learning Classification of Buildings for Map Generalization. 2017
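Several of the building-pattern entries above (Yan, X. et al. 2019; Zhao, R. et al. 2020; Yan, X. et al. 2022) share a common setup: model a building group as a graph whose nodes are buildings carrying simple geometric attributes, then classify with a graph convolutional network. Below is a minimal sketch of the graph-construction step only, assuming shapely polygons and Delaunay adjacency between centroids; the node features (area, perimeter, rectangularity) are illustrative choices, not the papers' exact indices.

```python
# Sketch: build a graph from building polygons for a GCN classifier,
# loosely following the graph setup common to the papers above.
# Node features here (area, perimeter, rectangularity) are illustrative.
import numpy as np
from scipy.spatial import Delaunay
from shapely.geometry import Polygon

def buildings_to_graph(polygons):
    centers = np.array([[p.centroid.x, p.centroid.y] for p in polygons])
    feats = np.array([
        [p.area, p.length, p.area / p.minimum_rotated_rectangle.area]
        for p in polygons
    ])
    # Delaunay triangulation of centroids yields a sparse adjacency,
    # a common proximity graph for building groups.
    edges = set()
    for simplex in Delaunay(centers).simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return feats, np.array(sorted(edges))

# Toy example: four rectangular buildings at varied positions
polys = [Polygon([(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)])
         for x, y in [(0, 0), (2, 1), (4, 0), (6, 2)]]
feats, edges = buildings_to_graph(polys)
print(feats.shape, edges.shape)  # (4, 3) node features, (E, 2) edge list
```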
Content Description
Xu, J. et al. Map Reading and Analysis with GPT-4V(ision). 2024
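Xu and Tao probe GPT-4V through its standard chat interface. A minimal sketch of such a call with the OpenAI Python client follows; the model identifier, prompt, and image URL are illustrative assumptions, not details from the paper.

```python
# Sketch: asking a vision-capable GPT model to read a map image, in the
# spirit of Xu & Tao (2024). Model name, prompt, and URL are illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this map: its title, legend, spatial "
                     "extent, and the main spatial pattern it shows."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/map.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```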
Metadata Retrieval
Hu, Y. et al. Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation. 2022
Li, J. Computational Cartographic Recognition: Exploring the Use of Machine Learning and Other Computational Approaches to Map Reading. 2022
Touya, G. et al. Inferring the scale and content of a map using deep learning. 2020
2C%20this%20paper%20proposes%20a%20method%20to%20infer%20the%20scale%20and%20the%20content%20of%20the%20map%20from%20its%20image.%20We%20used%20convolutional%20neural%20networks%20trained%20with%20a%20few%20hundred%20maps%20from%20French%20geography%20textbooks%2C%20and%20the%20results%20show%20promising%20results%20to%20infer%20labels%20about%20the%20content%20of%20the%20map%20%28e.g.%20%5C%22there%20are%20roads%2C%20cities%20and%20administrative%20boundaries%5C%22%29%2C%20and%20to%20infer%20the%20extent%20of%20the%20map%20%28e.g.%20a%20map%20of%20France%20or%20of%20Europe%29.%22%2C%22date%22%3A%222020-08%22%2C%22proceedingsTitle%22%3A%22ISPRS%20Congress%202020%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2020-17-2020%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fhal.archives-ouvertes.fr%5C%2Fhal-02873414%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A43%3A42Z%22%7D%7D%2C%7B%22key%22%3A%22MWDTNQBD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EZhou%2C%20X.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.1805.10402%27%3EDeep%20Convolutional%20Neural%20Networks%20for%20Map-Type%20Classification%3C%5C%2Fa%3E.%202018%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Deep%20Convolutional%20Neural%20Networks%20for%20Map-Type%20Classification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiran%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenwen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Samantha%22%2C%22lastName%22%3A%22Arundel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Maps%20are%20an%20important%20medium%20that%20enable%20people%20to%20comprehensively%20understand%20the%20configuration%20of%20cultural%20activities%20and%20natural%20elements%20over%20different%20times%20and%20places.%20Although%20massive%20maps%20are%20available%20in%20the%20digital%20era%2C%20how%20to%20effectively%20and%20accurately%20access%20the%20required%20map%20remains%20a%20challenge%20today.%20Previous%20works%20partially%20related%20to%20map-type%20classification%20mainly%20focused%20on%20map%20comparison%20and%20map%20matching%20at%20the%20local%20scale.%20The%20features%20derived%20from%20local%20map%20areas%20might%20be%20insufficient%20to%20characterize%20map%20content.%20To%20facilitate%20establishing%20an%20automatic%20approach%20for%20accessing%20the%20needed%20map%2C%20this%20paper%20reports%20our%20investigation%20into%20using%20deep%20learning%20techniques%20to%20recognize%20seven%20types%20of%20map%2C%20including%20topographic%20map%2C%20terrain%20map%2C%20physical%20map%2C%20urban%20scene%20map%2C%20the%20National%20Map%2C%203D%20map%2C%20nighttime%20map%2C%20orthophoto%20map%2C%20and%20land%20cover%20classification%20map.%20Experimental%20results%20show%20that%20the%20state-of-the-art%20deep%20convolution
al%20neural%20networks%20can%20support%20automatic%20map-type%20classification.%20Additionally%2C%20the%20classification%20accuracy%20varies%20according%20to%20different%20map-types.%20We%20hope%20our%20work%20can%20contribute%20to%20the%20implementation%20of%20deep%20learning%20techniques%20in%20cartographical%20community%20and%20advance%20the%20progress%20of%20Geographical%20Artificial%20Intelligence%20%28GeoAI%29.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22%22%2C%22date%22%3A%222018%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.1805.10402%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A56%3A45Z%22%7D%7D%5D%7D
Touya, G. et al. Inferring the scale and content of a map using deep learning. 2020
Zhou, X. et al. Deep Convolutional Neural Networks for Map-Type Classification. 2018
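
To make the theme of this section concrete, here is a minimal sketch of the approach shared by several of the papers above (e.g. Zhou et al. 2018): fine-tuning a pretrained CNN to classify map images by type. The directory layout ("map_images/train/<map_type>/...") and all hyperparameters are illustrative assumptions, not taken from any of the cited papers.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: map_images/train/<map_type>/*.png
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("map_images/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet head with one output per map type.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:  # one epoch; real training needs more
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()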
Design Analysis
Xi, D. et al. Research on map emotional semantics using deep learning approach. 2023
Keskin, M. et al. Potential of eye-tracking for interactive geovisual exploration aided by machine learning. 2023
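
The Xi et al. (2023) entry above is a transfer-learning study: pretrained CNNs (VGG16, VGG19, InceptionV3) are repurposed to classify maps into five discrete emotion categories. A minimal sketch of that setup, with the dataset wiring omitted and the specifics assumed rather than taken from the paper:

import torch.nn as nn
from torchvision import models

# Start from ImageNet-pretrained VGG16 and freeze the convolutional base.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():
    p.requires_grad = False
# Swap the final classifier layer for five map-emotion classes.
vgg.classifier[6] = nn.Linear(4096, 5)

Training then proceeds with a cross-entropy loss, sweeping learning rate and batch size as the paper does to find a good combination.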
Similarity Search
Klasen, V. et al. How we see time – the evolution and current state of visualizations of temporal data. 2023
Guo, D. et al. DeepSSN: A deep convolutional neural network to assess spatial scene similarity. 2022
Dobesova, Z. Experiment in Finding Look-Alike European Cities Using Urban Atlas Data. 2020
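
DeepSSN (Guo et al. 2022, above) frames similarity search as metric learning: scenes are embedded by a CNN trained with a triplet loss so that similar scenes end up close together. The sketch below shows only that core mechanism; the encoder, embedding size, and random tensors are placeholders, not the paper's architecture or data.

import torch
import torch.nn as nn
from torchvision import models

# CNN encoder producing a 128-d embedding per scene image.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Linear(encoder.fc.in_features, 128)

triplet = nn.TripletMarginLoss(margin=1.0)
anchor = torch.randn(8, 3, 224, 224)    # query scenes (placeholder data)
positive = torch.randn(8, 3, 224, 224)  # scenes known to be similar
negative = torch.randn(8, 3, 224, 224)  # mined dissimilar scenes
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()  # pulls positives toward anchors, pushes negatives away

At query time, a sketch map is embedded the same way and matched against the database by nearest-neighbour search over the embeddings.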
Text-to-Map
Zhang, Y. et al. GeoGPT: An assistant for understanding and processing geospatial tasks. 2024
MapGPT: Map-Guided Prompting with Adaptive Path Planning for Vision-and-Language Navigation. n.d.
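
GeoGPT (Zhang et al. 2024, above) follows an agent pattern: a large language model reads a natural-language task, then plans and calls GIS tools until the task is done. The sketch below shows only that control loop; the tool registry, the one-tool-per-line plan format, and call_llm are hypothetical stand-ins, not GeoGPT's actual interface.

from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a GIS operation the planner may call."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def buffer(layer: str, meters: str) -> str:
    return f"{layer} buffered by {meters} m"  # a real tool would run a GIS op

@tool
def spatial_query(layer: str, predicate: str) -> str:
    return f"features of {layer} where {predicate}"

def run(task: str, call_llm: Callable[[str], str]) -> str:
    result = task
    for _ in range(10):  # bound the plan length
        step = call_llm(f"Task: {task}\nLast result: {result}\nNext tool?")
        if step.strip() == "DONE":
            break
        name, *args = step.split()
        result = TOOLS[name](*args)  # execute the chosen tool
    return result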
Neural Rendering (Relief Shading)
Li, S. et al. Generation Method for Shaded Relief Based on Conditional Generative Adversarial Nets. 2022
Jenny, B. et al. Cartographic Relief Shading with Neural Networks. 2021
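
Jenny et al. (2021, above) train U-Net networks to map digital elevation models to hand-drawn-style shaded relief. The toy network below only illustrates the input/output contract (a one-channel DEM patch in, a one-channel brightness image out); the layer stack is an assumption, far simpler than the U-Nets used in the paper.

import torch
import torch.nn as nn

relief_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # brightness in [0, 1]
)
dem = torch.randn(1, 1, 256, 256)  # stand-in for a normalised DEM tile
shading = relief_net(dem)          # predicted shaded relief
# Training would minimise a pixel loss (e.g. nn.L1Loss()) against
# manual relief shadings aligned with the DEM tiles.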
Style Transfer
Wu, A.N. et al. GANmapper: geographical data translation. 2022
Ye, X. et al. MasterplanGAN: Facilitating the smart rendering of urban master plans via generative adversarial networks. 2022
Christophe, S. et al. Neural map style transfer exploration with GANs. 2022
Li, Z. et al. Synthetic Map Generation to Provide Unlimited Training Data for Historical Map Text Detection. 2021
Chen, X. et al. SMAPGAN: Generative Adversarial Network-Based Semisupervised Styled Map Tile Generation Method. 2021
Bogucka, E.P. et al. Projecting emotions from artworks to maps using neural style transfer. 2019
r%20technique%20and%20identifying%20its%20limitations%20for%20cartographic%20style%20and%20map%20content%2C%20we%20conclude%20with%20plausible%20directions%20for%20future%20research.%3C%5C%2Fp%3E%22%2C%22date%22%3A%222019%5C%2F07%5C%2F10%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fica-proc-2-9-2019%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F2%5C%2F9%5C%2F2019%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A39%3A04Z%22%7D%7D%2C%7B%22key%22%3A%225DVP367W%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kang%20et%20al.%22%2C%22parsedDate%22%3A%222019-05-04%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EKang%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1615729%27%3ETransferring%20multiscale%20map%20styles%20using%20generative%20adversarial%20networks%3C%5C%2Fa%3E.%202019%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Transferring%20multiscale%20map%20styles%20using%20generative%20adversarial%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuhao%22%2C%22lastName%22%3A%22Kang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%20E.%22%2C%22lastName%22%3A%22Roth%22%7D%5D%2C%22abstractNote%22%3A%22The%20advancement%20of%20the%20Artificial%20Intelligence%20%28AI%29%20technologies%20makes%20it%20possible%20to%20learn%20stylistic%20design%20criteria%20from%20existing%20maps%20or%20other%20visual%20art%20and%20transfer%20these%20styles%20to%20make%20new%20digital%20maps.%20In%20this%20paper%2C%20we%20propose%20a%20novel%20framework%20using%20AI%20for%20map%20style%20transfer%20applicable%20across%20multiple%20map%20scales.%20Specifically%2C%20we%20identify%20and%20transfer%20the%20stylistic%20elements%20from%20a%20target%20group%20of%20visual%20examples%2C%20including%20Google%20Maps%2C%20OpenStreetMap%2C%20and%20artistic%20paintings%2C%20to%20unstylized%20GIS%20vector%20data%20through%20two%20generative%20adversarial%20network%20%28GAN%29%20models.%20We%20then%20train%20a%20binary%20classifier%20based%20on%20a%20deep%20convolutional%20neural%20network%20to%20evaluate%20whether%20the%20transfer%20styled%20map%20images%20preserve%20the%20original%20map%20design%20characteristics.%20Our%20experiment%20results%20show%20that%20GANs%20have%20great%20potential%20for%20multiscale%20map%20style%20transferring%2C%20but%20many%20challenges%20remain%20requiring%20future%20research.%22%2C%22date%22%3A%222019-05-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F23729333.2019.1615729%22%2C%22ISSN%22%3A%222372-9333%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1615729%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A04%3A15Z%22%7D%7D%2C%7B%22key%22%3A%22X5HDH45F%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Isola%20et%20al.%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5
C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EIsola%2C%20P.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.computer.org%5C%2Fcsdl%5C%2Fproceedings-article%5C%2Fcvpr%5C%2F2017%5C%2F0457f967%5C%2F12OmNx965Bx%27%3EImage-to-Image%20Translation%20with%20Conditional%20Adversarial%20Networks%3C%5C%2Fa%3E.%202017%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Image-to-Image%20Translation%20with%20Conditional%20Adversarial%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Phillip%22%2C%22lastName%22%3A%22Isola%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun-Yan%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghui%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexei%20A.%22%2C%22lastName%22%3A%22Efros%22%7D%5D%2C%22abstractNote%22%3A%22We%20investigate%20conditional%20adversarial%20networks%20as%20a%20general-purpose%20solution%20to%20image-to-image%20translation%20problems.%20These%20networks%20not%20only%20learn%20the%20mapping%20from%20input%20image%20to%20output%20image%2C%20but%20also%20learn%20a%20loss%20function%20to%20train%20this%20mapping.%20This%20makes%20it%20possible%20to%20apply%20the%20same%20generic%20approach%20to%20problems%20that%20traditionally%20would%20require%20very%20different%20loss%20formulations.%20We%20demonstrate%20that%20this%20approach%20is%20effective%20at%20synthesizing%20photos%20from%20label%20maps%2C%20reconstructing%20objects%20from%20edge%20maps%2C%20and%20colorizing%20images%2C%20among%20other%20tasks.%20Moreover%2C%20since%20the%20release%20of%20the%20pi%5Cu00d72pi%5Cu00d7%20software%20associated%20with%20this%20paper%2C%20hundreds%20of%20twitter%20users%20have%20posted%20their%20own%20artistic%20experiments%20using%20our%20system.%20As%20a%20community%2C%20we%20no%20longer%20hand-engineer%20our%20mapping%20functions%2C%20and%20this%20work%20suggests%20we%20can%20achieve%20reasonable%20results%20without%20handengineering%20our%20loss%20functions%20either.%22%2C%22date%22%3A%222017%22%2C%22proceedingsTitle%22%3A%222017%20IEEE%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition%20%28CVPR%29%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.1109%5C%2FCVPR.2017.632%22%2C%22ISBN%22%3A%22978-1-5386-0457-1%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.computer.org%5C%2Fcsdl%5C%2Fproceedings-article%5C%2Fcvpr%5C%2F2017%5C%2F0457f967%5C%2F12OmNx965Bx%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A01%3A52Z%22%7D%7D%5D%7D
Wu, A.N. et al. GANmapper: geographical data translation. 2022
Christophe, S. et al. Neural map style transfer exploration with GANs. 2022
Bogucka, E.P. et al. Projecting emotions from artworks to maps using neural style transfer. 2019
Kang, Y. et al. Transferring multiscale map styles using generative adversarial networks. 2019
Isola, P. et al. Image-to-Image Translation with Conditional Adversarial Networks. 2017
Generalization (Lines)
Yan, X. et al. A deep learning approach for polyline and building simplification based on graph autoencoder with flexible constraints. 2024
Karsznia, I. et al. Using machine learning and data enrichment in the selection of roads for small-scale maps. 2024
Courtial, A. et al. DeepMapScaler: a workflow of deep neural networks for the generation of generalised maps. 2024
Du, J. et al. Segmentation and sampling method for complex polyline generalization based on a generative adversarial network. 2022
Courtial, A. et al. Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation. 2022
Du, J. et al. An ensemble learning simplification approach based on multiple machine-learning algorithms with the fusion using of raster and vector data and a use case of coastline simplification. 2022
Yu, W. et al. Data-driven polyline simplification using a stacked autoencoder-based deep neural network. 2022
Zheng, J. et al. Deep Graph Convolutional Networks for Accurate Automatic Road Network Selection. 2021
Courtial, A. et al. Generative adversarial networks to generalise urban areas in topographic maps. 2021
Courtial, A. et al. Exploring the Potential of Deep Learning Segmentation for Mountain Roads Generalisation. 2020
Generalization (Polygons)
Zhou, Z. et al. SpaGAN: A spatially-aware generative adversarial network for building generalization in image maps. 2024
Knura, M. Learning from vector data: enhancing vector-based shape encoding and shape classification for map generalization purposes. 2024
Fu, C. et al. Keeping walls straight: data model and training set size matter for deep learning in building generalization. 2024
Yan, X. et al. A deep learning approach for polyline and building simplification based on graph autoencoder with flexible constraints. 2024
Courtial, A. et al. DeepMapScaler: a workflow of deep neural networks for the generation of generalised maps. 2024
ch%20can%20be%20more%20easily%20resolved%20by%20a%20deep%20neural%20network.%20Our%20main%20contribution%20is%20a%20workflow%20of%20deep%20models%2C%20called%20DeepMapScaler%2C%20which%20achieves%20a%20step-by-step%20topographic%20map%20generalization%20from%20detailed%20topographic%20data.%20First%2C%20we%20implement%20this%20workflow%20to%20generalize%20topographic%20maps%20containing%20roads%2C%20buildings%2C%20and%20rivers%20at%20a%20medium%20scale%20%281%3A50k%29%20from%20a%20detailed%20dataset.%20The%20results%20of%20each%20step%20are%20quantitatively%20and%20visually%20evaluated.%20Then%20the%20generalized%20images%20are%20compared%20with%20the%20generalization%20performed%20using%20a%20holistic%20model%20for%20an%20end-to-end%20map%20generalization%20and%20a%20traditional%20semi-automatic%20map%20generalization%20process.%20The%20experiment%20shows%20that%20the%20workflow%20approach%20is%20more%20promising%20than%20the%20holistic%20model%2C%20as%20each%20sub-task%20is%20specialized%20and%20fine-tuned%20accordingly.%20However%2C%20the%20results%20still%20do%20not%20reach%20the%20quality%20level%20of%20the%20semi-automatic%20traditional%20map%20generalization%20process%2C%20as%20some%20sub-tasks%20are%20more%20complex%20to%20handle%20with%20neural%20networks.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2267419%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2267419%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A53%3A12Z%22%7D%7D%2C%7B%22key%22%3A%22PIM5X79V%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222023-08-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EZhou%2C%20Z.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623001697%27%3EMove%20and%20remove%3A%20Multi-task%20learning%20for%20building%20simplification%20in%20vector%20maps%20with%20a%20graph%20convolutional%20neural%20network%3C%5C%2Fa%3E.%202023%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Move%20and%20remove%3A%20Multi-task%20learning%20for%20building%20simplification%20in%20vector%20maps%20with%20a%20graph%20convolutional%20neural%20network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiyong%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cheng%22%2C%22lastName%22%3A%22Fu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Weibel%22%7D%5D%2C%22abstractNote%22%3A%22Simplification%20of%20building%20footprints%20is%20an%20essential%20task%20in%20topographic%20map%20generalization%20from%20large%20to%20medium%20scales.%20The%20traditional%20rule-%20or%20constraint-based%20algorithms%20commonly%20require%20cartographers%20to%20enumerate%20and%20formalize%20as%20many%20scenarios%20as%20possible.%20Recently%2C%20some%20studies%20have%20introduced%20deep%20learning%20to%20image%20map%20generalization%2C%20whose%20outputs%2C%20however%2C%20may%20exhibit%20deformed%20boundaries%20due%20to%20
pure%20image%20input.%20Vector%20maps%20are%20thus%20a%20reasonable%20solution%20to%20avoid%20such%20issues%20because%20of%20their%20accurate%2C%20object-based%20geometric%20representation.%20However%2C%20few%20existing%20studies%20have%20aimed%20to%20simplify%20buildings%20in%20vector%20maps%20with%20the%20help%20of%20neural%20networks.%20Building%20simplification%20in%20vector%20maps%20can%20be%20regarded%20as%20the%20joint%20contribution%20from%20two%20elementary%20operations%20on%20vertices%20of%20building%20polygons%3A%20remove%20redundant%20vertices%20and%20move%20kept%20vertices.%20This%20research%20proposes%20a%20multi-task%20learning%20method%20with%20graph%20convolutional%20neural%20networks.%20The%20proposed%20method%20formulates%20the%20building%20simplification%20problem%20as%20a%20joint%20task%20of%20node%20removal%20classification%20and%20node%20movement%20regression.%20A%20multi-task%20graph%20convolutional%20neural%20network%20model%20%28MT_GCNN%29%20is%20developed%20to%20learn%20node%20removal%20and%20movement%20simultaneously.%20The%20model%20was%20evaluated%20with%20a%20map%20from%20Stuttgart%2C%20Germany%20that%20contains%208494%20buildings%20generalized%20from%20the%20source%20scale%20of%201%3A5%2C000%20to%20the%20target%20scale%20of%201%3A10%2C000.%20The%20experimental%20results%20show%20that%20the%20proposed%20method%20can%20generate%2080%25%20of%20the%20buildings%20with%20positional%20errors%20of%20less%20than%200.2%20m%2C%2095%25%20with%20a%20shape%20difference%20under%200.5%2C%20and%20around%2098%25%20with%20an%20area%20difference%20under%200.1%20of%20IoU%2C%20compared%20to%20the%20ground%20truth%20target%20buildings%2C%20thus%20demonstrating%20the%20feasibility%20of%20the%20proposed%20method.%20The%20code%20is%20available%20at%3A%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fchouisgiser%5C%2FMapGeneralizer.%22%2C%22date%22%3A%222023-08-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.isprsjprs.2023.06.004%22%2C%22ISSN%22%3A%220924-2716%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623001697%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-09-07T20%3A44%3A36Z%22%7D%7D%2C%7B%22key%22%3A%22TAS79G64%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222022-09-14%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EZhou%2C%20Z.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.abstr-int-cartogr-assoc.net%5C%2F5%5C%2F86%5C%2F2022%5C%2F%27%3EBuilding%20simplification%20of%20vector%20maps%20using%20graph%20convolutional%20neural%20networks%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Building%20simplification%20of%20vector%20maps%20using%20graph%20convolutional%20neural%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiyong%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cheng%22%2C%22lastName%22%3A%22Fu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Weibel%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222022%5C%2F09%5C%2F14%22%2C%22
language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fica-abs-5-86-2022%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.abstr-int-cartogr-assoc.net%5C%2F5%5C%2F86%5C%2F2022%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A56%3A55Z%22%7D%7D%2C%7B%22key%22%3A%22BAWBLWXJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222022-06-10%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ECourtial%2C%20A.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F3%5C%2F32%5C%2F2022%5C%2F%27%3ERepresenting%20Vector%20Geographic%20Information%20As%20a%20Tensor%20for%20Deep%20Learning%20Based%20Map%20Generalisation%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Representing%20Vector%20Geographic%20Information%20As%20a%20Tensor%20for%20Deep%20Learning%20Based%20Map%20Generalisation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22Recently%2C%20many%20researchers%20tried%20to%20generate%20%28generalised%29%20maps%20using%20deep%20learning%2C%20and%20most%20of%20the%20proposed%20methods%20deal%20with%20deep%20neural%20network%20architecture%20choices.%20Deep%20learning%20learns%20to%20reproduce%20examples%2C%20so%20we%20think%20that%20improving%20the%20training%20examples%2C%20and%20especially%20the%20representation%20of%20the%20initial%20geographic%20information%2C%20is%20the%20key%20issue%20for%20this%20problem.%20Our%20article%20extracts%20some%20representation%20issues%20from%20a%20literature%20review%20and%20proposes%20different%20ways%20to%20represent%20vector%20geographic%20information%20as%20a%20tensor.We%20propose%20two%20kinds%20of%20contributions%3A%201%29%20the%20representation%20of%20information%20by%20layers%3B%202%29%20the%20representation%20of%20additional%20information.%20Then%2C%20we%20demonstrate%20the%20interest%20of%20some%20of%20our%20propositions%20with%20experiments%20that%20show%20a%20visual%20improvement%20for%20the%20generation%20of%20generalised%20topographic%20maps%20in%20urban%20areas.%22%2C%22date%22%3A%222022%5C%2F06%5C%2F10%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fagile-giss-3-32-2022%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F3%5C%2F32%5C%2F2022%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A48%3A06Z%22%7D%7D%2C%7B%22key%22%3A%22X7VCA2LP%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yan%20et%20al.%22%2C%22parsedDate%22%3A%222022-02-20%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EYan%2C%20X.%20et%20al.%20%3Ca%20class
%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27http%3A%5C%2F%5C%2Fxb.chinasmp.com%5C%2Farticle%5C%2F2022%5C%2F1001-1595%5C%2F2022-2-269.htm%27%3EAn%20adaptive%20building%20simplification%20approach%20based%20on%20shape%20analysis%20and%20representation%3C%5C%2Fa%3E.%202022%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22An%20adaptive%20building%20simplification%20approach%20based%20on%20shape%20analysis%20and%20representation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tuo%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kong%22%2C%22lastName%22%3A%22Bo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pengcheng%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22%5Cu5efa%5Cu7b51%5Cu7269%5Cu5316%5Cu7b80%5Cu662f%5Cu5730%5Cu56fe%5Cu5236%5Cu56fe%5Cu9886%5Cu57df%5Cu5173%5Cu6ce8%5Cu7684%5Cu70ed%5Cu70b9%5Cu95ee%5Cu9898%5Cu4e4b%5Cu4e00%5Cu3002%5Cu96c6%5Cu6210%5Cu4e0d%5Cu540c%5Cu7b97%5Cu6cd5%5Cu6784%5Cu5efa%5Cu5f62%5Cu72b6%5Cu7279%5Cu5f81%5Cu81ea%5Cu9002%5Cu5e94%5Cu7684%5Cu5316%5Cu7b80%5Cu6a21%5Cu578b%5Cu662f%5Cu5e94%5Cu5bf9%5Cu5efa%5Cu7b51%5Cu7269%5Cu591a%5Cu6837%5Cu5316%5Cu5f62%5Cu6001%5Cu7684%5Cu6709%5Cu6548%5Cu7b56%5Cu7565%5Cuff0c%5Cu4f46%5Cu5f53%5Cu524d%5Cu76f8%5Cu5173%5Cu7814%5Cu7a76%5Cu4e3b%5Cu8981%5Cu4ece%5Cu5c40%5Cu90e8%5Cu7ed3%5Cu6784%5Cu6a21%5Cu5f0f%5Cu6216%5Cu5316%5Cu7b80%5Cu7ed3%5Cu679c%5Cu8bc4%5Cu4ef7%5Cu5c55%5Cu5f00%5Cuff0c%5Cu7f3a%5Cu4e4f%5Cu5bf9%5Cu5f62%5Cu72b6%5Cu7ed3%5Cu6784%5Cu7684%5Cu6574%5Cu4f53%5Cu5206%5Cu6790%5Cu89c6%5Cu89d2%5Cu548c%5Cu6df1%5Cu5c42%5Cu6b21%5Cu8ba4%5Cu77e5%5Cu3002%5Cu672c%5Cu6587%5Cu63d0%5Cu51fa%5Cu4e00%5Cu79cd%5Cu6df1%5Cu5ea6%5Cu5b66%5Cu4e60%5Cu652f%5Cu6301%5Cu4e0b%5Cu7684%5Cu5f62%5Cu72b6%5Cu81ea%5Cu9002%5Cu5e94%5Cu5efa%5Cu7b51%5Cu7269%5Cu5316%5Cu7b80%5Cu65b9%5Cu6cd5%5Cu3002%5Cu9996%5Cu5148%5Cuff0c%5Cu5229%5Cu7528%5Cu56fe%5Cu5377%5Cu79ef%5Cu81ea%5Cu7f16%5Cu7801%5Cu7f51%5Cu7edc%5Cu5bf9%5Cu5efa%5Cu7b51%5Cu7269%5Cu5f62%5Cu72b6%5Cu8fdb%5Cu884c%5Cu6df1%5Cu5ea6%5Cu8ba4%5Cu77e5%5Cuff0c%5Cu63d0%5Cu53d6%5Cu9690%5Cu542b%5Cu5728%5Cu8fb9%5Cu754c%5Cu8282%5Cu70b9%5Cu5206%5Cu5e03%5Cu4e2d%5Cu7684%5Cu5f62%5Cu72b6%5Cu7279%5Cu5f81%5Cu5e76%5Cu8fdb%5Cu884c%5Cu7f16%5Cu7801%5Cu8868%5Cu8fbe%5Cuff1b%5Cu7136%5Cu540e%5Cuff0c%5Cu901a%5Cu8fc7%5Cu76d1%5Cu7763%5Cu5b66%5Cu4e60%5Cu65b9%5Cu6cd5%5Cu5efa%5Cu7acb%5Cu5f62%5Cu72b6%5Cu7f16%5Cu7801%5Cu4e0e%5Cu5316%5Cu7b80%5Cu7b97%5Cu6cd5%5Cu4e4b%5Cu95f4%5Cu7684%5Cu6620%5Cu5c04%5Cu5173%5Cu7cfb%5Cuff0c%5Cu4ece%5Cu800c%5Cu5b9e%5Cu73b0%5Cu4f9d%5Cu636e%5Cu8f93%5Cu5165%5Cu5efa%5Cu7b51%5Cu7269%5Cu7684%5Cu5f62%5Cu72b6%5Cu7279%5Cu5f81%5Cu9009%5Cu62e9%5Cu9002%5Cu5b9c%5Cu5316%5Cu7b80%5Cu7b97%5Cu6cd5%5Cu7684%5Cu81ea%5Cu9002%5Cu5e94%5Cu673a%5Cu5236%5Cu3002%5Cu8bd5%5Cu9a8c%5Cu8868%5Cu660e%5Cuff0c%5Cu672c%5Cu6587%5Cu65b9%5Cu6cd5%5Cu7684%5Cu5316%5Cu7b80%5Cu7ed3%5Cu679c%5Cu5728%5Cu4f4d%5Cu7f6e%5Cu3001%5Cu65b9%5Cu5411%5Cu3001%5Cu9762%5Cu79ef%5Cu548c%5Cu5f62%5Cu72b6%5Cu4fdd%5Cu6301%5Cu6307%5Cu6807%5Cu4e0a%5Cu603b%5Cu4f53%5Cu4f18%5Cu4e8e%5Cu5355%5Cu4e00%5Cu7b97%5Cu6cd5%5Cuff0c%5Cu5177%5Cu5907%5Cu8f83%5Cu597d%5Cu7684%5Cu7406%5Cu8bba%5Cu4e0e%5Cu5e94%5Cu7528%5Cu4ef7%5Cu503c%5Cu3002%22%2C%22date%22%3A%222022-02-20%22%2C%22language%22%3A%22cn%22%2C%22DOI%22%3A%2210.11947%5C%2Fj.AGCS.
2022.20210302%22%2C%22ISSN%22%3A%222021-0302%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Fxb.chinasmp.com%5C%2Farticle%5C%2F2022%5C%2F1001-1595%5C%2F2022-2-269.htm%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A53%3A18Z%22%7D%7D%2C%7B%22key%22%3A%228XFZHMEL%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222021-06-30%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ECourtial%2C%20A.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.int-arch-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2021%5C%2F15%5C%2F2021%5C%2F%27%3EGenerative%20adversarial%20networks%20to%20generalise%20urban%20areas%20in%20topographic%20maps%3C%5C%2Fa%3E.%202021%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Generative%20adversarial%20networks%20to%20generalise%20urban%20areas%20in%20topographic%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A.%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22G.%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22X.%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22%3Cp%3E%3Cstrong%20class%3D%5C%22journal-contentHeaderColor%5C%22%3EAbstract.%3C%5C%2Fstrong%3E%20This%20article%20presents%20how%20a%20generative%20adversarial%20network%20%28GAN%29%20can%20be%20employed%20to%20produce%20a%20generalised%20map%20that%20combines%20several%20cartographic%20themes%20in%20the%20dense%20context%20of%20urban%20areas.%20We%20use%20as%20input%20detailed%20buildings%2C%20roads%2C%20and%20rivers%20from%20topographic%20datasets%20produced%20by%20the%20French%20national%20mapping%20agency%20%28IGN%29%2C%20and%20we%20expect%20as%20output%20of%20the%20GAN%20a%20legible%20map%20of%20these%20elements%20at%20a%20target%20scale%20of%201%3A50%2C000.%20This%20level%20of%20detail%20requires%20to%20reduce%20the%20amount%20of%20information%20while%20preserving%20patterns%3B%20covering%20dense%20inner%20cities%20block%20by%20a%20unique%20polygon%20is%20also%20necessary%20because%20these%20blocks%20cannot%20be%20represented%20with%20enlarged%20individual%20buildings.%20The%20target%20map%20has%20a%20style%20similar%20to%20the%20topographic%20map%20produced%20by%20IGN.%20This%20experiment%20succeeded%20in%20producing%20image%20tiles%20that%20look%20like%20legible%20maps.%20It%20also%20highlights%20the%20impact%20of%20data%20and%20representation%20choices%20on%20the%20quality%20of%20predicted%20images%2C%20and%20the%20challenge%20of%20learning%20geographic%20relationships.%3C%5C%2Fp%3E%22%2C%22date%22%3A%222021%5C%2F06%5C%2F30%22%2C%22proceedingsTitle%22%3A%22The%20International%20Archives%20of%20the%20Photogrammetry%2C%20Remote%20Sensing%20and%20Spatial%20Information%20Sciences%22%2C%22conferenceName%22%3A%22XXIV%20ISPRS%20Congress%20%3Cq%3EImaging%20today%2C%20foreseeing%20tomorrow%3C%5C%2Fq%3E%2C%20Commission%20IV%20-%202021%20edition%2C%205%26ndash%3B9%20July%202021%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2021-15-2021%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.int-arch
-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2021%5C%2F15%5C%2F2021%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A01%3A48Z%22%7D%7D%2C%7B%22key%22%3A%22LXM87X5J%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222019-07-10%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EWu%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F2%5C%2F147%5C%2F2019%5C%2F%27%3EApplication%20of%20Deep%20Learning%20for%203D%20building%20generalization%3C%5C%2Fa%3E.%202019%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Application%20of%20Deep%20Learning%20for%203D%20building%20generalization%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yue%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yevgeniya%22%2C%22lastName%22%3A%22Filippovska%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Valentina%22%2C%22lastName%22%3A%22Schmidt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Kada%22%7D%5D%2C%22abstractNote%22%3A%22%3Cp%3E%3Cstrong%3EAbstract.%3C%5C%2Fstrong%3E%20The%20generalization%20of%203D%20buildings%20is%20a%20challenging%20task%2C%20which%20needs%20to%20consider%20geometry%20information%2C%20semantic%20content%20and%20topology%20relations%20of%203D%20buildings.%20Although%20many%20algorithms%20with%20detailed%20and%20reasonable%20designs%20have%20been%20developed%20for%20the%203D%20building%20generalization%2C%20there%20are%20still%20cases%20that%20could%20be%20further%20studied.%20As%20a%20fast-growing%20technique%2C%20Deep%20Learning%20has%20shown%20its%20ability%20to%20build%20complex%20concepts%20out%20of%20simpler%20concepts%20in%20many%20fields.%20Therefore%2C%20in%20this%20paper%2C%20Deep%20Learning%20is%20used%20to%20solve%20the%20regression%20%28generalization%20of%20individual%203D%20building%29%20and%20classification%20problems%20%28classification%20of%20roof%20type%29%20simultaneously.%20Firstly%2C%20the%20test%20dataset%20is%20generated%20and%20labelled%20with%20the%20generalization%20results%20as%20well%20as%20the%20classification%20of%20roof%20types.%20Buildings%20with%20saddleback%2C%20half-hip%2C%20and%20hip%20roof%20are%20selected%20as%20the%20experimental%20objects%20since%20their%20generalization%20results%20can%20be%20uniformly%20represented%20by%20a%20common%20vector%20which%20aims%20to%20meet%20the%20compatible%20representation%20of%20Deep%20Learning.%20Then%2C%20the%20pre-trained%20ResNet50%20is%20used%20as%20the%20baseline%20network.%20The%20optimal%20model%20capacity%20is%20searched%20within%20an%20extensive%20ablation%20study%20in%20the%20framework%20of%20the%20building%20generalization%20problem.%20After%20that%2C%20a%20multi-task%20network%20is%20built%20by%20adding%20a%20branch%20of%20classification%20to%20the%20above%20network%2C%20which%20is%20in%20parallel%20with%20the%20generalization%20branch.%20In%20the%20process%20of%20training%2C%20the%20imbalance%20problems%20of%20tasks%20and%20classes%20are%20solved%20by%20adjusting%20their%20donations%20to%20the%20total%20l
oss%20function.%20It%20is%20found%20that%20less%20error%20rate%20is%20obtained%20after%20adding%20a%20classification%20branch.%20For%20the%20final%20results%2C%20two%20improved%20metrics%20are%20used%20to%20evaluate%20the%20generalization%20performance.%20The%20accuracy%20of%20generalization%20reached%20over%2095%25%20for%20horizontal%20information%20and%2085%25%20for%20height%2C%20while%20the%20accuracy%20of%20classification%20reached%20100%25%20on%20the%20test%20dataset.%3C%5C%2Fp%3E%22%2C%22date%22%3A%222019%5C%2F07%5C%2F10%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fica-proc-2-147-2019%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F2%5C%2F147%5C%2F2019%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A46%3A27Z%22%7D%7D%2C%7B%22key%22%3A%22BFMCHSK4%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Feng%20et%20al.%22%2C%22parsedDate%22%3A%222019-06%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EFeng%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F8%5C%2F6%5C%2F258%27%3ELearning%20Cartographic%20Building%20Generalization%20with%20Deep%20Convolutional%20Neural%20Networks%3C%5C%2Fa%3E.%202019%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Learning%20Cartographic%20Building%20Generalization%20with%20Deep%20Convolutional%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yu%22%2C%22lastName%22%3A%22Feng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Frank%22%2C%22lastName%22%3A%22Thiemann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Monika%22%2C%22lastName%22%3A%22Sester%22%7D%5D%2C%22abstractNote%22%3A%22Cartographic%20generalization%20is%20a%20problem%2C%20which%20poses%20interesting%20challenges%20to%20automation.%20Whereas%20plenty%20of%20algorithms%20have%20been%20developed%20for%20the%20different%20sub-problems%20of%20generalization%20%28e.g.%2C%20simplification%2C%20displacement%2C%20aggregation%29%2C%20there%20are%20still%20cases%2C%20which%20are%20not%20generalized%20adequately%20or%20in%20a%20satisfactory%20way.%20The%20main%20problem%20is%20the%20interplay%20between%20different%20operators.%20In%20those%20cases%20the%20human%20operator%20is%20the%20benchmark%2C%20who%20is%20able%20to%20design%20an%20aesthetic%20and%20correct%20representation%20of%20the%20physical%20reality.%20Deep%20learning%20methods%20have%20shown%20tremendous%20success%20for%20interpretation%20problems%20for%20which%20algorithmic%20methods%20have%20deficits.%20A%20prominent%20example%20is%20the%20classification%20and%20interpretation%20of%20images%2C%20where%20deep%20learning%20approaches%20outperform%20traditional%20computer%20vision%20methods.%20In%20both%20domains-computer%20vision%20and%20cartography-humans%20are%20able%20to%20produce%20good%20solutions.%20A%20prerequisite%20for%20the%20application%20of%20deep%20learning%20is%20the%20availability%20of%20many%20representative%20training%20examples%20for%20the%20situation%20to%20be%20learned.%20As%20this%20is%20given%20in%20cartography%20%28there%20are%20many%20existing%20map%20series%
29%2C%20the%20idea%20in%20this%20paper%20is%20to%20employ%20deep%20convolutional%20neural%20networks%20%28DCNNs%29%20for%20cartographic%20generalizations%20tasks%2C%20especially%20for%20the%20task%20of%20building%20generalization.%20Three%20network%20architectures%2C%20namely%20U-net%2C%20residual%20U-net%20and%20generative%20adversarial%20network%20%28GAN%29%2C%20are%20evaluated%20both%20quantitatively%20and%20qualitatively%20in%20this%20paper.%20They%20are%20compared%20based%20on%20their%20performance%20on%20this%20task%20at%20target%20map%20scales%201%3A10%2C000%2C%201%3A15%2C000%20and%201%3A25%2C000%2C%20respectively.%20The%20results%20indicate%20that%20deep%20learning%20models%20can%20successfully%20learn%20cartographic%20generalization%20operations%20in%20one%20single%20model%20in%20an%20implicit%20way.%20The%20residual%20U-net%20outperforms%20the%20others%20and%20achieved%20the%20best%20generalization%20performance.%22%2C%22date%22%3A%222019%5C%2F6%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi8060258%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F8%5C%2F6%5C%2F258%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A04%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22U72GKRKD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Touya%20et%20al.%22%2C%22parsedDate%22%3A%222019-05-04%22%2C%22numChildren%22%3A3%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ETouya%2C%20G.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1613071%27%3EIs%20deep%20learning%20the%20new%20agent%20for%20map%20generalization%3F%3C%5C%2Fa%3E%202019%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Is%20deep%20learning%20the%20new%20agent%20for%20map%20generalization%3F%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Imran%22%2C%22lastName%22%3A%22Lokhat%22%7D%5D%2C%22abstractNote%22%3A%22The%20automation%20of%20map%20generalization%20has%20been%20keeping%20researchers%20in%20cartography%20busy%20for%20years.%20Particularly%20great%20progress%20was%20made%20in%20the%20late%2090s%20with%20the%20use%20of%20the%20multi-agent%20paradigm.%20Although%20the%20current%20use%20of%20automatic%20processes%20in%20some%20national%20mapping%20agencies%20is%20a%20great%20achievement%2C%20there%20are%20still%20many%20unsolved%20issues%20and%20research%20seems%20to%20stagnate%20in%20the%20recent%20years.%20With%20the%20success%20of%20deep%20learning%20in%20many%20fields%20of%20science%2C%20including%20geographic%20information%20science%2C%20this%20paper%20poses%20the%20controversial%20question%20of%20the%20title%3A%20is%20deep%20learning%20the%20new%20agent%2C%20i.e.%20the%20technique%20that%20will%20make%20generalization%20research%20bridge%20the%20gap%20to%20fully%20automated%20generalization%20processes%3F%20The%20paper%20neither%20responds%20a%20clear%20yes%20nor%20a%20clear%20no%20but%20discusses%20what%20issues%20could%20be%20tackled%20with%20deep%20l
earning%20and%20what%20the%20promising%20perspectives.%20Some%20preliminary%20experiments%20with%20building%20generalization%20or%20data%20enrichments%20are%20presented%20to%20support%20the%20discussion.%22%2C%22date%22%3A%222019-05-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F23729333.2019.1613071%22%2C%22ISSN%22%3A%222372-9333%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1613071%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A06%3A34Z%22%7D%7D%5D%7D
Courtial, A. et al. DeepMapScaler: a workflow of deep neural networks for the generation of generalised maps. 2024
Zhou, Z. et al. Building simplification of vector maps using graph convolutional neural networks. 2022
Courtial, A. et al. Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation. 2022
Courtial, A. et al. Generative adversarial networks to generalise urban areas in topographic maps. 2021
Wu, Y. et al. Application of Deep Learning for 3D building generalization. 2019
Feng, Y. et al. Learning Cartographic Building Generalization with Deep Convolutional Neural Networks. 2019
Touya, G. et al. Is deep learning the new agent for map generalization? 2019
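Several of the vector-based entries above share a common formulation: the building outline is treated as a graph of vertices, and generalization becomes per-vertex prediction. The snippet below is a minimal PyTorch sketch of the "move and remove" idea described by Zhou et al. (2023), with one classification head for vertex removal and one regression head for vertex displacement. Layer names and sizes are illustrative assumptions, not the authors' MT_GCNN (their implementation is published at https://github.com/chouisgiser/MapGeneralizer).

import torch
import torch.nn as nn

class MultiTaskGCN(nn.Module):
    # Illustrative sketch, not the MT_GCNN of Zhou et al. (2023).
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.remove_head = nn.Linear(hidden, 1)  # per-vertex removal logit
        self.move_head = nn.Linear(hidden, 2)    # per-vertex (dx, dy) displacement

    def forward(self, x, adj):
        # x: (N, 2) vertex coordinates; adj: (N, N) adjacency of the polygon ring.
        # Symmetric normalization A_hat = D^-1/2 (A + I) D^-1/2 (Kipf-Welling style).
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).rsqrt()
        a_hat = d[:, None] * a * d[None, :]
        h = torch.relu(self.w1(a_hat @ x))
        h = torch.relu(self.w2(a_hat @ h))
        return self.remove_head(h).squeeze(-1), self.move_head(h)

def joint_loss(remove_logit, move_pred, remove_gt, move_gt, w=1.0):
    # Joint objective: cross-entropy for node removal, squared error on the
    # displacement of the kept vertices only.
    cls = nn.functional.binary_cross_entropy_with_logits(remove_logit, remove_gt)
    keep = (remove_gt < 0.5).float().unsqueeze(-1)
    reg = ((move_pred - move_gt) ** 2 * keep).sum() / keep.sum().clamp(min=1)
    return cls + w * reg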
Abstraction
Karamatsu, T. et al. Iconify: Converting Photographs into Icons. 2020
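The Iconify paper relies on unpaired image-to-image translation, since no one-to-one photo-to-icon ground truth exists. As a rough orientation, the sketch below shows the cycle-consistency constraint that makes CycleGAN-style training possible without paired data; G and F are placeholder generator networks, the adversarial terms are omitted, and nothing here is the Iconify authors' code.

import torch

def cycle_consistency_loss(G, F, photos, icons, lam=10.0):
    # G: photo -> icon generator; F: icon -> photo generator (both assumed nn.Modules).
    # Translating to the other domain and back should reconstruct the input.
    loss_photo = torch.mean(torch.abs(F(G(photos)) - photos))  # photo -> icon -> photo
    loss_icon = torch.mean(torch.abs(G(F(icons)) - icons))     # icon -> photo -> icon
    return lam * (loss_photo + loss_icon)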
Displacement (Labels)
Oucheikh, R. et al. A feasibility study of applying generative deep learning models for map labeling. 2024
Harrie, L. et al. Label Placement Challenges in City Wayfinding Map Production—Identification and Possible Solutions. 2022
Lan, T. et al. An ANNs-Based Method for Automated Labelling of Schematic Metro Maps. 2022
Li, Y. et al. Automatic label placement of area-features using deep learning. 2020
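Li et al. (2020) frame area-feature labeling as key-point detection on a binary raster of the feature: the network predicts a heatmap, and a simple post-process turns the peak into a label position (their reported errors were mostly within one pixel, about 1.2 m). The sketch below covers only that post-processing step; the detector itself is omitted, and the function and argument names are illustrative assumptions.

import numpy as np

def heatmap_to_label_position(heatmap, origin, pixel_size):
    # heatmap: (H, W) key-point scores from the detection network.
    # origin: (x, y) map coordinates of the upper-left image corner.
    # pixel_size: ground size of one pixel in map units.
    row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    x = origin[0] + (col + 0.5) * pixel_size
    y = origin[1] - (row + 0.5) * pixel_size  # image rows grow downward
    return x, y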
Georeferencing and Map Registration
Wu, S. et al. Unsupervised historical map registration by a deformation neural network. 2022
Feng, J. et al. DeepMM: Deep Learning Based Map Matching With Data Augmentation. 2022
Duan, W. et al. A Label Correction Algorithm Using Prior Information for Automatic and Accurate Geospatial Object Recognition. 2021
22%2C%22date%22%3A%222021-12%22%2C%22proceedingsTitle%22%3A%222021%20IEEE%20International%20Conference%20on%20Big%20Data%22%2C%22conferenceName%22%3A%222021%20IEEE%20International%20Conference%20on%20Big%20Data%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FBigData52589.2021.9671657%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9671657%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A21%3A01Z%22%7D%7D%2C%7B%22key%22%3A%22A9VYFZP3%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Duan%20et%20al.%22%2C%22parsedDate%22%3A%222020-04-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EDuan%2C%20W.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1698742%27%3EAutomatic%20alignment%20of%20contemporary%20vector%20data%20and%20georeferenced%20historical%20maps%20using%20reinforcement%20learning%3C%5C%2Fa%3E.%202020%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20alignment%20of%20contemporary%20vector%20data%20and%20georeferenced%20historical%20maps%20using%20reinforcement%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johannes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22With%20large%20amounts%20of%20digital%20map%20archives%20becoming%20available%2C%20automatically%20extracting%20information%20from%20scanned%20historical%20maps%20is%20needed%20for%20many%20domains%20that%20require%20long-term%20historical%20geographic%20data.%20Convolutional%20Neural%20Networks%20%28CNN%29%20are%20powerful%20techniques%20that%20can%20be%20used%20for%20extracting%20locations%20of%20geographic%20features%20from%20scanned%20maps%20if%20sufficient%20representative%20training%20data%20are%20available.%20Existing%20spatial%20data%20can%20provide%20the%20approximate%20locations%20of%20corresponding%20geographic%20features%20in%20historical%20maps%20and%20thus%20be%20useful%20to%20annotate%20training%20data%20automatically.%20However%2C%20the%20feature%20representations%2C%20publication%20date%2C%20production%20scales%2C%20and%20spatial%20reference%20systems%20of%20contemporary%20vector%20data%20are%20typically%20very%20different%20from%20those%20of%20historical%20maps.%20Hence%2C%20such%20auxiliary%20data%20cannot%20be%20directly%20used%20for%20annotation%20of%20the%20precise%20locations%20of%20the%20features%20of%20interest%20in%20the%20scanned%20historical%20maps.%20This%20research%20introduces%20an%20automatic%20vector-to-raster%20alignment%20algorithm%20based%20on%20reinforcement%20learning%20to%20annotate%20precise%20locations%20of%20geographic%20features%20on%20scanned%20maps.%20This%20paper%20models%20the%20alignment%20problem%20using%20the%20reinf
orcement%20learning%20framework%2C%20which%20enables%20informed%2C%20efficient%20searches%20for%20matching%20features%20without%20pre-processing%20steps%2C%20such%20as%20extracting%20specific%20feature%20signatures%20%28e.g.%20road%20intersections%29.%20The%20experimental%20results%20show%20that%20our%20algorithm%20can%20be%20applied%20to%20various%20features%20%28roads%2C%20water%20lines%2C%20and%20railroads%29%20and%20achieve%20high%20accuracy.%22%2C%22date%22%3A%222020-04-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2019.1698742%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1698742%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A31%3A40Z%22%7D%7D%5D%7D
Wu, S. et al. Unsupervised historical map registration by a deformation neural network. 2022
Feng, J. et al. DeepMM: Deep Learning Based Map Matching With Data Augmentation. 2022
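For readers new to this line of work, the seq2seq formulation behind DeepMM can be sketched in a few lines of PyTorch: a noisy, discretised GPS trajectory is encoded and road-segment ids are decoded. The cell/segment vocabularies, dimensions, and the omission of attention and data augmentation are simplifications of ours, not the paper's code.

import torch
import torch.nn as nn

class Trajectory2Road(nn.Module):
    # Toy seq2seq: discretised GPS trajectory in, road-segment ids out.
    def __init__(self, n_cells, n_segments, d=64):
        super().__init__()
        self.cell_embed = nn.Embedding(n_cells, d)
        self.seg_embed = nn.Embedding(n_segments, d)
        self.encoder = nn.GRU(d, d, batch_first=True)
        self.decoder = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, n_segments)

    def forward(self, cells, prev_segs):
        _, h = self.encoder(self.cell_embed(cells))          # encode trajectory
        dec, _ = self.decoder(self.seg_embed(prev_segs), h)  # teacher forcing
        return self.out(dec)                                 # (B, T, n_segments)

model = Trajectory2Road(n_cells=10000, n_segments=5000)
cells = torch.randint(0, 10000, (8, 30))   # 8 trajectories, 30 points each
segs = torch.randint(0, 5000, (8, 30))     # ground-truth road segments
logits = model(cells, segs)
loss = nn.functional.cross_entropy(logits.reshape(-1, 5000), segs.reshape(-1))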
Inpainting (Lines)
Fang, Z. et al. A topography-aware approach to the automatic generation of urban road networks. 2022
Yu, W. et al. Filling gaps of cartographic polylines by using an encoder–decoder model. 2022
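A minimal sketch of the masked-reconstruction core these polyline-completion models share (the cited works add generative and adversarial machinery on top): the gap is zeroed out, the mask is passed alongside the coordinates, and the loss supervises only the missing vertices. All shapes and names here are illustrative assumptions.

import torch
import torch.nn as nn

class PolylineGapFiller(nn.Module):
    # Toy encoder-decoder over a vertex sequence: regress the masked vertices.
    def __init__(self, d=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, d, 5, padding=2), nn.ReLU(),
            nn.Conv1d(d, d, 5, padding=2), nn.ReLU(),
            nn.Conv1d(d, 2, 5, padding=2),
        )

    def forward(self, xy, mask):
        # zero the gap and pass the mask so the network knows where it is
        return self.net(torch.cat([xy * mask, mask], dim=1))

xy = torch.randn(4, 2, 100)                    # 4 polylines, 100 vertices
mask = torch.ones(4, 1, 100)
mask[:, :, 40:60] = 0                          # simulate a gap
pred = PolylineGapFiller()(xy, mask)
loss = ((pred - xy) ** 2 * (1 - mask)).mean()  # supervise only the gap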
Inpainting (Raster)
Stölzle, M. et al. Reconstructing Occluded Elevation Information in Terrain Maps With Self-Supervised Learning. 2022
Li, S. et al. Integrating topographic knowledge into deep learning for the void-filling of digital elevation models. 2022
Zhou, G. et al. Voids Filling of DEM with Multiattention Generative Adversarial Network Model. 2022
Zhang, C. et al. DEM Void Filling Based on Context Attention Generation Model. 2020
Dong, G. et al. Filling Voids in Elevation Models Using a Shadow-Constrained Convolutional Neural Network. 2020
Gavriil, K. et al. Void Filling of Digital Elevation Models With Deep Generative Models. 2019
Qiu, Z. et al. Void Filling of Digital Elevation Models with a Terrain Texture Learning Model Based on Generative Adversarial Networks. 2019
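The DEM void-filling papers above differ in architecture (contextual attention, multi-attention GANs, shadow or topographic constraints), but share a masked-reconstruction core that a short PyTorch sketch can convey: feed the incomplete tile plus its validity mask, reconstruct the voids. The network and loss below are a deliberately tiny baseline of ours, not any cited paper's model.

import torch
import torch.nn as nn

class DEMVoidFiller(nn.Module):
    # Tiny fully-convolutional baseline: DEM tile with voids in, full tile out.
    def __init__(self, d=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, d, 3, padding=1), nn.ReLU(),
            nn.Conv2d(d, d, 3, padding=1), nn.ReLU(),
            nn.Conv2d(d, 1, 3, padding=1),
        )

    def forward(self, dem, valid):
        return self.net(torch.cat([dem * valid, valid], dim=1))

dem = torch.randn(2, 1, 128, 128)                    # normalised elevation
valid = (torch.rand(2, 1, 128, 128) > 0.2).float()   # 1 = observed, 0 = void
pred = DEMVoidFiller()(dem, valid)
loss = ((pred - dem).abs() * (1 - valid)).mean()     # L1 on the voids only;
# the cited GAN-based models add an adversarial term to this objective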
3D Reconstruction
Schnürer, R. et al. Inferring implicit 3D representations from human figures on pictorial maps. 2024
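The implicit-SDF representation at the heart of this workflow reduces, in its simplest form, to a coordinate MLP that maps a 3D query point (plus a latent code for the figure or body part) to a signed distance. The sketch below is our illustration under assumed latent-code and layer sizes, not the authors' network.

import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    # Coordinate MLP: 3D point plus a per-figure latent code -> signed distance.
    def __init__(self, code_dim=16, d=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, d), nn.ReLU(),
            nn.Linear(d, d), nn.ReLU(),
            nn.Linear(d, 1),
        )

    def forward(self, pts, code):
        code = code.expand(pts.shape[0], -1)
        return self.net(torch.cat([pts, code], dim=-1))

sdf = ImplicitSDF()
pts = torch.rand(1024, 3) * 2 - 1        # query points in [-1, 1]^3
code = torch.randn(1, 16)                # latent describing one body part
d = sdf(pts, code)                       # negative inside, positive outside
near_surface = pts[d.squeeze(-1).abs() < 0.01]   # crude surface extraction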
Geolocalisation
Qian, C. et al. A Coarse-to-Fine Model for Geolocating Chinese Addresses. 2020
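The coarse-to-fine idea here, predicting one grid cell per hierarchy level rather than raw coordinates, can be illustrated with a small character-level model. We substitute a plain LSTM encoder and independent per-level heads for the paper's BERT+LSTM design over GeoSOT codes, so treat every name and size below as an assumption.

import torch
import torch.nn as nn

class AddressToGrid(nn.Module):
    # Toy coarse-to-fine geocoder: address characters in, one cell per level out.
    def __init__(self, vocab=6000, levels=8, cells=16, d=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.encoder = nn.LSTM(d, d, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(d, cells) for _ in range(levels))

    def forward(self, chars):
        _, (h, _) = self.encoder(self.embed(chars))
        return [head(h[-1]) for head in self.heads]   # coarsest level first

logits = AddressToGrid()(torch.randint(0, 6000, (4, 24)))   # 4 short addresses
grid_code = [l.argmax(-1) for l in logits]   # one cell choice per hierarchy level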
Geographic Entity Extraction
Mao, H. et al. Mapping near-real-time power outages from social media. 2019
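Mao et al.'s second stage is a bidirectional LSTM that tags tweet tokens as location mentions. A generic sketch of that family of taggers follows; the vocabulary size and the O/B-LOC/I-LOC tag set are chosen arbitrarily for illustration.

import torch
import torch.nn as nn

class LocationTagger(nn.Module):
    # BiLSTM token tagger: label every token as O / B-LOC / I-LOC.
    def __init__(self, vocab=20000, d=64, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.bilstm = nn.LSTM(d, d, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d, n_tags)

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))
        return self.out(h)                      # (batch, seq, n_tags) logits

tweets = torch.randint(0, 20000, (8, 30))       # 8 tokenised posts
tags = LocationTagger()(tweets).argmax(-1)      # predicted tag per token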
Object and Phenomenon Detection
Liang, Y. et al. GeoAI-enhanced community detection on spatial networks with graph deep learning. 2025
Valdez, D.B. et al. A Deep Learning Approach of Recognizing Natural Disasters on Images using Convolutional Neural Network and Transfer Learning. 2021
5%5C%2F3487923.3487927%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A53%3A49Z%22%7D%7D%2C%7B%22key%22%3A%22CAJR76AM%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Feng%20and%20Sester%22%2C%22parsedDate%22%3A%222018-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3EFeng%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F7%5C%2F2%5C%2F39%27%3EExtraction%20of%20Pluvial%20Flood%20Relevant%20Volunteered%20Geographic%20Information%20%28VGI%29%20by%20Deep%20Learning%20from%20User%20Generated%20Texts%20and%20Photos%3C%5C%2Fa%3E.%202018%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Extraction%20of%20Pluvial%20Flood%20Relevant%20Volunteered%20Geographic%20Information%20%28VGI%29%20by%20Deep%20Learning%20from%20User%20Generated%20Texts%20and%20Photos%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yu%22%2C%22lastName%22%3A%22Feng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Monika%22%2C%22lastName%22%3A%22Sester%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20years%2C%20pluvial%20floods%20caused%20by%20extreme%20rainfall%20events%20have%20occurred%20frequently.%20Especially%20in%20urban%20areas%2C%20they%20lead%20to%20serious%20damages%20and%20endanger%20the%20citizens%5Cu2019%20safety.%20Therefore%2C%20real-time%20information%20about%20such%20events%20is%20desirable.%20With%20the%20increasing%20popularity%20of%20social%20media%20platforms%2C%20such%20as%20Twitter%20or%20Instagram%2C%20information%20provided%20by%20voluntary%20users%20becomes%20a%20valuable%20source%20for%20emergency%20response.%20Many%20applications%20have%20been%20built%20for%20disaster%20detection%20and%20flood%20mapping%20using%20crowdsourcing.%20Most%20of%20the%20applications%20so%20far%20have%20merely%20used%20keyword%20filtering%20or%20classical%20language%20processing%20methods%20to%20identify%20disaster%20relevant%20documents%20based%20on%20user%20generated%20texts.%20As%20the%20reliability%20of%20social%20media%20information%20is%20often%20under%20criticism%2C%20the%20precision%20of%20information%20retrieval%20plays%20a%20significant%20role%20for%20further%20analyses.%20Thus%2C%20in%20this%20paper%2C%20high%20quality%20eyewitnesses%20of%20rainfall%20and%20flooding%20events%20are%20retrieved%20from%20social%20media%20by%20applying%20deep%20learning%20approaches%20on%20user%20generated%20texts%20and%20photos.%20Subsequently%2C%20events%20are%20detected%20through%20spatiotemporal%20clustering%20and%20visualized%20together%20with%20these%20high%20quality%20eyewitnesses%20in%20a%20web%20map%20application.%20Analyses%20and%20case%20studies%20are%20conducted%20during%20flooding%20events%20in%20Paris%2C%20London%20and%20Berlin.%22%2C%22date%22%3A%222018%5C%2F2%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi7020039%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F7%5C%2F2%5C%2F39%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A53%3A47Z%22%7D%7D%5D%7D
Liang, Y. et al. GeoAI-enhanced community detection on spatial networks with graph deep learning. 2025
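Liang et al. position their region2vec models against classical graph baselines that use only edge structure. For orientation, here is a minimal sketch of such a baseline: greedy modularity maximization on a toy weighted interaction network with networkx. The toy graph and its weights are invented for illustration; the cited method additionally learns node embeddings from attributes, adjacency, and spatial interactions before clustering.

# Minimal sketch: classical community detection on a toy spatial
# interaction network, the kind of baseline region2vec is compared
# against. Edge weights stand in for interaction intensity.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical toy network: nodes are regions, weighted edges are flows.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 90), ("A", "C", 60),   # one dense cluster
    ("D", "E", 110), ("E", "F", 95), ("D", "F", 70),   # another cluster
    ("C", "D", 5),                                     # weak bridge
])

# Greedy modularity maximization; weights bias the partition toward
# keeping high-interaction region pairs in the same community.
communities = greedy_modularity_communities(G, weight="weight")
for i, comm in enumerate(communities):
    print(f"community {i}: {sorted(comm)}")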
Remote Sensing
Li, W. et al. Assessment of a new GeoAI foundation model for flood inundation mapping. 2023. https://dl.acm.org/doi/10.1145/3615886.3627747
Li, W. et al. GeoImageNet: a multi-source natural feature benchmark dataset for GeoAI and supervised machine learning. 2023. https://doi.org/10.1007/s10707-022-00476-z
Hsu, C.-Y. et al. Explainable GeoAI: can saliency maps help interpret artificial intelligence's learning process? An empirical study on natural feature detection. 2023. https://doi.org/10.1080/13658816.2023.2191256
Li, W. et al. Tobler's First Law in GeoAI: A Spatially Explicit Deep Learning Model for Terrain Feature Detection under Weak Supervision. 2021. https://doi.org/10.1080/24694452.2021.1877527
Wang, S. et al. GeoAI in terrain analysis: Enabling multi-source deep learning and data fusion for natural feature detection. 2021. https://www.sciencedirect.com/science/article/pii/S0198971521001228
Hsu, C.-Y. et al. Knowledge-Driven GeoAI: Integrating Spatial Knowledge into Multi-Scale Deep Learning for Mars Crater Detection. 2021. https://www.mdpi.com/2072-4292/13/11/2116
Li, W. et al. Automated terrain feature identification from remote sensing imagery: a deep learning approach. 2020. https://doi.org/10.1080/13658816.2018.1542697
Mubin, N.A. et al. Young and mature oil palm tree detection and counting using convolutional neural network deep learning method. 2019. https://doi.org/10.1080/01431161.2019.1569282
Sublime, J. et al. Automatic Post-Disaster Damage Mapping Using Deep-Learning Techniques for Change Detection: Case Study of the Tohoku Tsunami. 2019. https://www.mdpi.com/2072-4292/11/9/1123
Li, W. et al. Recognizing terrain features on terrestrial surface using a deep learning model: an example with crater detection. 2017. https://dl.acm.org/doi/10.1145/3149808.3149814
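Several entries above (GeoImageNet, the crater-detection papers) benchmark region-proposal detectors such as Faster R-CNN. The sketch below shows the bare inference step with torchvision's pretrained model; the COCO weights, dummy input, and 0.5 score threshold are placeholders, since the cited work trains on terrain imagery, often with fused DEM channels.

# Minimal sketch: running a pretrained Faster R-CNN, the detector family
# benchmarked in several of the papers above. The COCO weights used here
# are a stand-in; the cited work trains on terrain/crater imagery.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# A dummy 3-channel "scene"; a real pipeline would load imagery tiles
# (and, as in the multi-source papers, possibly fused DEM channels).
image = torch.rand(3, 512, 512)

with torch.no_grad():
    predictions = model([image])  # list of dicts: boxes, labels, scores

boxes = predictions[0]["boxes"]
scores = predictions[0]["scores"]
keep = scores > 0.5  # simple confidence threshold
print(boxes[keep], scores[keep])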
Cleaning and Conflation
Processing Workflows
Li, Z. et al. Autonomous GIS: the next-generation AI-powered GIS. 2023. https://doi.org/10.1080/17538947.2023.2278895
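The core move in Li and Ning's LLM-Geo prototype is to treat a large language model as the reasoning engine that drafts spatial-analysis code. A minimal sketch of that single prompt-to-code step follows, using the OpenAI Python SDK; the file name and task are hypothetical, and the actual system wraps this call in self-verifying and self-executing stages rather than merely printing the draft.

# Minimal sketch of the idea behind LLM-Geo's "reasoning core": ask an
# LLM to draft geoprocessing code from a natural-language task. The
# file name and task below are placeholders; the real system adds
# verification, execution, and result assembly around this single call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = (
    "Write Python (geopandas) that loads 'counties.geojson', "
    "computes population density, and saves a choropleth map as PNG."
)

response = client.chat.completions.create(
    model="gpt-4",  # the model family the paper used; still a placeholder
    messages=[
        {"role": "system", "content": "You generate runnable GIS code."},
        {"role": "user", "content": task},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # LLM-Geo would verify and execute a vetted version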
Record Linkage (Addresses)
Li, F. et al. Multi-task deep learning model based on hierarchical relations of address elements for semantic address matching. 2022. https://doi.org/10.1007/s00521-022-06914-1
Xu, L. et al. Deep Transfer Learning Model for Semantic Address Matching. 2022. https://www.mdpi.com/2076-3417/12/19/10110
Cheng, R. et al. A location conversion method for roads through deep learning-based semantic matching and simplified qualitative direction knowledge representation. 2021. https://www.sciencedirect.com/science/article/pii/S0952197621002487
Park, S. et al. BertLoc: duplicate location record detection in a large-scale location dataset. 2021. https://doi.org/10.1145/3412841.3441969
Chen, J. et al. Deep Contrast Learning Approach for Address Semantic Matching. 2021. https://www.mdpi.com/2076-3417/11/16/7608
%20of%20all%2C%20ABLC%20use%20the%20Trie%20syntax%20tree%20algorithm%20to%20extract%20Chinese%20address%20elements.%20Next%2C%20based%20on%20the%20basic%20idea%20of%20contrast%20learning%2C%20a%20hybrid%20neural%20network%20is%20applied%20to%20learn%20the%20semantic%20information%20in%20the%20address.%20Finally%2C%20Manhattan%20distance%20is%20calculated%20as%20the%20similarity%20of%20the%20two%20addresses.%20Experiments%20on%20the%20self-constructed%20dataset%20with%20data%20augmentation%20demonstrate%20that%20the%20proposed%20model%20has%20better%20stability%20and%20performance%20compared%20with%20other%20baselines.%22%2C%22date%22%3A%222021%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fapp11167608%22%2C%22ISSN%22%3A%222076-3417%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2076-3417%5C%2F11%5C%2F16%5C%2F7608%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A42%3A58Z%22%7D%7D%2C%7B%22key%22%3A%22LL5RE6FD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lin%20et%20al.%22%2C%22parsedDate%22%3A%222020-03-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%3Cdiv%20class%3D%5C%22csl-bib-body%5C%22%20style%3D%5C%22line-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%5C%22%3E%5Cn%20%3Cdiv%20class%3D%5C%22csl-entry%5C%22%3ELin%2C%20Y.%20et%20al.%20%3Ca%20class%3D%27zp-ItemURL%27%20target%3D%27_blank%27%20href%3D%27https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1681431%27%3EA%20deep%20learning%20architecture%20for%20semantic%20address%20matching%3C%5C%2Fa%3E.%202020%3C%5C%2Fdiv%3E%5Cn%3C%5C%2Fdiv%3E%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20deep%20learning%20architecture%20for%20semantic%20address%20matching%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yue%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mengjun%22%2C%22lastName%22%3A%22Kang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuyang%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qingyun%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tao%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Address%20matching%20is%20a%20crucial%20step%20in%20geocoding%2C%20which%20plays%20an%20important%20role%20in%20urban%20planning%20and%20management.%20To%20date%2C%20the%20unprecedented%20development%20of%20location-based%20services%20has%20generated%20a%20large%20amount%20of%20unstructured%20address%20data.%20Traditional%20address%20matching%20methods%20mainly%20focus%20on%20the%20literal%20similarity%20of%20address%20records%20and%20are%20therefore%20not%20applicable%20to%20the%20unstructured%20address%20data.%20In%20this%20study%2C%20we%20introduce%20an%20address%20matching%20method%20based%20on%20deep%20learning%20to%20identify%20the%20semantic%20similarity%20between%20address%20records.%20First%2C%20we%20train%20the%20word2vec%20model%20to%20transform%20the%20address%20records%20into%20their%20corresponding%20vector%20representations.%20Next%2C%20we%20apply%20the%20enhanced%20sequential%20inference%20model%20%28ESIM%29%2C%20a%20deep%20text-matching%20model%2C%20to%20make%20local%20and%20global%20inferences%20to%20determine%20if%20two%20addresses%20match.%20To%20evaluate%20the%20accuracy%20of%20the%20proposed%20method%2C%20we%20fine-tune%20the%20model%20with%20real-wo
rld%20address%20data%20from%20the%20Shenzhen%20Address%20Database%20and%20compare%20the%20outputs%20with%20those%20of%20several%20popular%20address%20matching%20methods.%20The%20results%20indicate%20that%20the%20proposed%20method%20achieves%20a%20higher%20matching%20accuracy%20for%20unstructured%20address%20records%2C%20with%20its%20precision%2C%20recall%2C%20and%20F1%20score%20%28i.e.%2C%20the%20harmonic%20mean%20of%20precision%20and%20recall%29%20reaching%200.97%20on%20the%20test%20set.%22%2C%22date%22%3A%222020-03-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2019.1681431%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1681431%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A08%3A08Z%22%7D%7D%5D%7D
Xu, L. et al. Deep Transfer Learning Model for Semantic Address Matching. 2022
Park, S. et al. BertLoc: duplicate location record detection in a large-scale location dataset. 2021
Chen, J. et al. Deep Contrast Learning Approach for Address Semantic Matching. 2021
Lin, Y. et al. A deep learning architecture for semantic address matching. 2020
Record Linkage (Toponyms)
Fize, J. et al. Deep Learning for Toponym Resolution: Geocoding Based on Pairs of Toponyms. 2021
Alexis, K. et al. Boosting toponym interlinking by paying attention to both machine and deep learning. 2020
Santos, R. et al. Toponym matching through deep neural networks. 2018
Data Structures
Zhang, Z. et al. An AI-based Spatial Knowledge Graph for Enhancing Spatial Data and Knowledge Search and Discovery. 2021
Wayfinding and Routing
Hei, Q. et al. Detecting dynamic visual attention in augmented reality aided navigation environment based on a multi-feature integration fully convolutional network. 2023
Liu, Z. et al. DeepGPS: Deep Learning Enhanced GPS Positioning in Urban Canyons. 2022
Recommender Systems
Pramanik, S. et al. Deep Learning Driven Venue Recommender for Event-Based Social Networks. 2020
Risk Prevention
Kim, J.-M. et al. Strategic framework for natural disaster risk mitigation using deep learning and cost-benefit analysis. 2022
Kang, B. et al. A deep-learning-based emergency alert system. 2016
Modeling and Simulations (Physical Geography)
Estacio, I. et al. Predicting the future through observations of the past: Concretizing the role of Geosimulation for holistic geospatial knowledge. 2024
Roy, A. et al. Using generative adversarial networks (GAN) to simulate central-place foraging trajectories. 2022
Modeling and Simulations (Human Geography)
Alastal, A.I. et al. GeoAI Technologies and Their Application Areas in Urban Planning and Development: Concepts, Opportunities and Challenges in Smart City (Kuwait, Study Case). 2022
Boulila, W. et al. A novel CNN-LSTM-based approach to predict urban expansion. 2021
Wu, Y. et al. Deep Learning-Based Super-Resolution Climate Simulator-Emulator Framework for Urban Heat Studies. 2021
%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1029%5C%2F2021GL094737%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A44%3A28Z%22%7D%7D%5D%7D
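Boulila et al. (above) predict urban expansion from time-series satellite images with a ConvLSTM network, which learns spatio-temporal features without shrinking the spatial feature maps. The following is a minimal Keras sketch of that idea, not the authors' implementation; the patch size, number of input frames, layer widths, and training settings are all illustrative assumptions.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

T, H, W, C = 4, 64, 64, 3  # assumed: 4 past frames of 64x64 3-band patches

model = models.Sequential([
    layers.Input(shape=(T, H, W, C)),
    # ConvLSTM layers learn spatio-temporal structure; padding="same"
    # preserves the spatial feature-map size, as the paper emphasizes.
    layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True),
    layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=False),
    # Per-pixel prediction of the next frame (values scaled to [0, 1]).
    layers.Conv2D(C, kernel_size=3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")  # MSE is one of the paper's metrics

# Dummy arrays standing in for co-registered satellite image time series.
x = np.random.rand(8, T, H, W, C).astype("float32")
y = np.random.rand(8, H, W, C).astype("float32")
model.fit(x, y, epochs=1, batch_size=4, verbose=0)
print(model.predict(x[:1]).shape)  # -> (1, 64, 64, 3)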
Wu, Y. et al. Deep Learning-Based Super-Resolution Climate Simulator-Emulator Framework for Urban Heat Studies. 2021. https://onlinelibrary.wiley.com/doi/abs/10.1029/2021GL094737
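Wu et al. (above) build their emulator around an efficient sub-pixel convolution layer that generates fine (250 m) urban climate fields from coarse (2.5 km) simulations: a convolution emits r*r output channels per pixel, and a depth-to-space rearrangement turns them into an r-times finer grid. Below is a minimal Keras sketch of that upsampling pattern only; the grid sizes, channel counts, upscaling factor, and loss are illustrative assumptions, and the paper's adversarial training is omitted.

import tensorflow as tf
from tensorflow.keras import layers, models

r = 10               # assumed upscaling factor (2.5 km -> 250 m)
C_in, C_out = 4, 1   # assumed: 4 coarse input fields, 1 fine output field

inp = layers.Input(shape=(32, 32, C_in))               # coarse grid (assumed size)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(C_out * r * r, 3, padding="same")(x)  # r*r sub-pixel channels
# Rearrange channels into space: (32, 32, 100) -> (320, 320, 1).
out = layers.Lambda(lambda t: tf.nn.depth_to_space(t, r))(x)
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mae")
print(model.output_shape)  # -> (None, 320, 320, 1)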
Ethics
Marasinghe, R. et al. Towards Responsible Urban Geospatial AI: Insights From the White and Grey Literatures. 2024. https://doi.org/10.1007/s41651-024-00184-2
Kang, Y. et al. Artificial intelligence studies in cartography: a review and synthesis of methods, applications, and ethics. 2024. https://www.tandfonline.com/doi/full/10.1080/15230406.2023.2295943
Remember, this is just a starting point. Explore these resources, search for specific topics within GeoAI, and contribute your own findings to broaden the knowledge base of this rapidly evolving field!
So far, this bibliography covers only research applying deep learning architectures; work based on traditional machine learning algorithms has not yet been included.