Introduction
This bibliography offers a starting point for exploring GeoAI research, covering key publications, textbooks, and online resources. Consider it a living document that will continue to evolve as the field progresses.
Core Books
Kang, Y. et al. Artificial Intelligence for Cartography and Maps. In: GeoAI and Human Geography: The Dawn of a New Spatial Intelligence Era. 2025. https://doi.org/10.1007/978-3-031-87421-5_16
Schiewe, J. Artificial Intelligence in Cartography. In: Cartography: Visualization of Geospatial Data. 2025. https://doi.org/10.1007/978-3-031-83023-5_20
Gao, S. et al. Handbook of Geospatial Artificial Intelligence. 2023. https://www.taylorfrancis.com/books/9781003308423
Uhl, J.H. and Duan, W. Automating Information Extraction from Large Historical Topographic Map Archives: New Opportunities and Challenges. In: Handbook of Big Geospatial Data. 2021. https://doi.org/10.1007/978-3-030-55462-0_20
Chiang, Y.-Y. et al. Training Deep Learning Models for Geographic Feature Recognition from Historical Maps. In: Using Historical Maps in Scientific Studies: Applications, Challenges, and Best Practices. 2020. https://doi.org/10.1007/978-3-319-66908-3_4
Core Articles
Janowicz, K. et al. GeoFM: how will geo-foundation models reshape spatial data science and GeoAI? 2025. https://doi.org/10.1080/13658816.2025.2543038
Affolter, C. et al. Generative AI in Map-Making: A Technical Exploration and Its Implications for Cartographers. 2025. http://arxiv.org/abs/2508.18959
Ye, X. et al. Human-centered GeoAI foundation models: where GeoAI meets human dynamics. 2025. https://doi.org/10.1007/s44212-025-00067-x
Mai, G. et al. Towards the next generation of Geospatial Artificial Intelligence. 2025. https://www.sciencedirect.com/science/article/pii/S1569843225000159
ich%20are%20classified%20into%20two%20groups%3A%20GeoAI%20method%20development%20challenges%20and%20GeoAI%20Ethics%20challenges.%20Topics%20include%20heterogeneity-aware%20GeoAI%2C%20knowledge-guided%20GeoAI%2C%20spatial%20representation%20learning%2C%20geo-foundation%20models%2C%20fairness-aware%20GeoAI%2C%20privacy-aware%20GeoAI%2C%20as%20well%20as%20interpretable%20and%20explainable%20GeoAI.%20We%20hope%20our%20review%20of%20GeoAI%5Cu2019s%20past%2C%20present%2C%20and%20future%20is%20comprehensive%20and%20can%20enlighten%20the%20next%20generation%20of%20GeoAI%20research.%22%2C%22date%22%3A%222025-02-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.jag.2025.104368%22%2C%22ISSN%22%3A%221569-8432%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS1569843225000159%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-02T16%3A30%3A02Z%22%7D%7D%2C%7B%22key%22%3A%223BDJKK5F%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mai%20et%20al.%22%2C%22parsedDate%22%3A%222024-07-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMai%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3653070%26%23039%3B%26gt%3BOn%20the%20Opportunities%20and%20Challenges%20of%20Foundation%20Models%20for%20GeoAI%20%28Vision%20Paper%29%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22On%20the%20Opportunities%20and%20Challenges%20of%20Foundation%20Models%20for%20GeoAI%20%28Vision%20Paper%29%22%2C%2
2creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gengchen%22%2C%22lastName%22%3A%22Mai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiming%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jin%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Suhang%22%2C%22lastName%22%3A%22Song%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Deepak%22%2C%22lastName%22%3A%22Mishra%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ninghao%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tianming%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gao%22%2C%22lastName%22%3A%22Cong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yingjie%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chris%22%2C%22lastName%22%3A%22Cundy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ziyuan%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rui%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ni%22%2C%22lastName%22%3A%22Lao%22%7D%5D%2C%22abstractNote%22%3A%22Large%20pre-trained%20models%2C%20also%20known%20as%20foundation%20models%20%28FMs%29%2C%20are%20trained%20in%20a%20task-agnostic%20manner%20on%20large-scale%20data%20and%20can%20be%20adapted%20to%20a%20wide%20range%20of%20downstream%20tasks%20by%20fine-tuning%2C%20few-shot%2C%20or%20even%20zero-shot%20learning.%20Despite%20their%20successes%20in%20language%20and%20vision%20tasks%2C%20we%20have%20not%20yet%20seen%20an%20attempt%20to%20develop%20foundation%20models%20for%20geospatial%20artificial%20intelligence
%20%28GeoAI%29.%20In%20this%20work%2C%20we%20explore%20the%20promises%20and%20challenges%20of%20developing%20multimodal%20foundation%20models%20for%20GeoAI.%20We%20first%20investigate%20the%20potential%20of%20many%20existing%20FMs%20by%20testing%20their%20performances%20on%20seven%20tasks%20across%20multiple%20geospatial%20domains%2C%20including%20Geospatial%20Semantics%2C%20Health%20Geography%2C%20Urban%20Geography%2C%20and%20Remote%20Sensing.%20Our%20results%20indicate%20that%20on%20several%20geospatial%20tasks%20that%20only%20involve%20text%20modality%2C%20such%20as%20toponym%20recognition%2C%20location%20description%20recognition%2C%20and%20US%20state-level%5C%2Fcounty-level%20dementia%20time%20series%20forecasting%2C%20the%20task-agnostic%20large%20learning%20models%20%28LLMs%29%20can%20outperform%20task-specific%20fully%20supervised%20models%20in%20a%20zero-shot%20or%20few-shot%20learning%20setting.%20However%2C%20on%20other%20geospatial%20tasks%2C%20especially%20tasks%20that%20involve%20multiple%20data%20modalities%20%28e.g.%2C%20POI-based%20urban%20function%20classification%2C%20street%20view%20image%5Cu2013based%20urban%20noise%20intensity%20classification%2C%20and%20remote%20sensing%20image%20scene%20classification%29%2C%20existing%20FMs%20still%20underperform%20task-specific%20models.%20Based%20on%20these%20observations%2C%20we%20propose%20that%20one%20of%20the%20major%20challenges%20of%20developing%20an%20FM%20for%20GeoAI%20is%20to%20address%20the%20multimodal%20nature%20of%20geospatial%20tasks.%20After%20discussing%20the%20distinct%20challenges%20of%20each%20geospatial%20data%20modality%2C%20we%20suggest%20the%20possibility%20of%20a%20multimodal%20FM%20that%20can%20reason%20over%20various%20types%20of%20geospatial%20data%20through%20geospatial%20alignments.%20We%20conclude%20this%20article%20by%20discussing%20the%20unique%20risks%20and%20challenges%20to%20developing%20such%20a%20model%20for%20GeoAI.%22%2C%22date%22%3A%22Juli%201%2C%202024%22%2C%22langua
ge%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3653070%22%2C%22ISSN%22%3A%222374-0353%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3653070%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-12T22%3A35%3A40Z%22%7D%7D%2C%7B%22key%22%3A%22WB5RWJ9Z%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Hu%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-29%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHu%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F19475683.2024.2309866%26%23039%3B%26gt%3BA%20five-year%20milestone%3A%20reflections%20on%20advances%20and%20limitations%20in%20GeoAI%20research%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20five-year%20milestone%3A%20reflections%20on%20advances%20and%20limitations%20in%20GeoAI%20research%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yingjie%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Goodchild%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22A-Xing%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22May%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Orhun%22%2C%22lastName%22%3A%22Aydin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Budhendra%22%2C%22lastName%22%3A%22Bhaduri%22%7D%2C%
7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenwen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dalton%22%2C%22lastName%22%3A%22Lunga%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shawn%22%2C%22lastName%22%3A%22Newsam%22%7D%5D%2C%22abstractNote%22%3A%22The%20Annual%20Meeting%20of%20the%20American%20Association%20of%20Geographers%20%28AAG%29%20in%202023%20marked%20a%20five-year%20milestone%20since%20the%20first%20Geospatial%20Artificial%20Intelligence%20%28GeoAI%29%20Symposium%20was%20held%20at%20AAG%20in%202018.%20In%20the%20past%20five%20years%2C%20progress%20has%20been%20made%20while%20open%20questions%20remain.%20In%20this%20context%2C%20we%20organized%20an%20AAG%20panel%20and%20invited%20five%20panellists%20to%20discuss%20the%20advances%20and%20limitations%20in%20GeoAI%20research.%20The%20panellists%20commended%20the%20successes%2C%20such%20as%20the%20development%20of%20spatially%20explicit%20models%2C%20the%20production%20of%20large-scale%20geographic%20datasets%2C%20and%20the%20use%20of%20GeoAI%20to%20address%20real-world%20problems.%20The%20panellists%20also%20shared%20their%20thoughts%20on%20limitations%20in%20current%20GeoAI%20research%2C%20which%20were%20considered%20as%20opportunities%20to%20engage%20theories%20in%20geography%2C%20enhance%20model%20explainability%2C%20quantify%20uncertainty%2C%20and%20improve%20model%20generalizability.%20This%20article%20summarizes%20the%20presentations%20from%20the%20panellists%20and%20also%20provides%20after-panel%20thoughts%20from%20the%20organizers.%20We%20hope%20that%20this%20article%20can%20make%20these%20thoughts%20more%20accessible%20to%20interested%20readers%20and%20help%20stimulate%20new%20ideas%20for%20future%20breakthroughs.%22%2C%22date%22%3A%222024-01-29%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1080%5C%2F19475683.2
024.2309866%22%2C%22ISSN%22%3A%221947-5683%2C%201947-5691%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F19475683.2024.2309866%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A28%3A57Z%22%7D%7D%2C%7B%22key%22%3A%226A6ZJC4D%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kang%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-16%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295943%26%23039%3B%26gt%3BArtificial%20intelligence%20studies%20in%20cartography%3A%20a%20review%20and%20synthesis%20of%20methods%2C%20applications%2C%20and%20ethics%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Artificial%20intelligence%20studies%20in%20cartography%3A%20a%20review%20and%20synthesis%20of%20methods%2C%20applications%2C%20and%20ethics%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuhao%22%2C%22lastName%22%3A%22Kang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%20E.%22%2C%22lastName%22%3A%22Roth%22%7D%5D%2C%22abstractNote%22%3A%22The%20past%20decade%20has%20witnessed%20the%20rapid%20development%20of%20geospatial%20artificial%20intelligence%20%28GeoAI%29%20primarily%20due%20to%20the%20ground-breaking%20achievements%20in%20deep%20learn
ing%20and%20machine%20learning.%20A%20growing%20number%20of%20scholars%20from%20cartography%20have%20demonstrated%20successfully%20that%20GeoAI%20can%20accelerate%20previously%20complex%20cartographic%20design%20tasks%20and%20even%20enable%20cartographic%20creativity%20in%20new%20ways.%20Despite%20the%20promise%20of%20GeoAI%2C%20researchers%20and%20practitioners%20have%20growing%20concerns%20about%20the%20ethical%20issues%20of%20GeoAI%20for%20cartography.%20In%20this%20paper%2C%20we%20conducted%20a%20systematic%20content%20analysis%20and%20narrative%20synthesis%20of%20research%20studies%20integrating%20GeoAI%20and%20cartography%20to%20summarize%20current%20research%20and%20development%20trends%20regarding%20the%20usage%20of%20GeoAI%20for%20cartographic%20design.%20Based%20on%20this%20review%20and%20synthesis%2C%20we%20first%20identify%20dimensions%20of%20GeoAI%20methods%20for%20cartography%20such%20as%20data%20sources%2C%20data%20formats%2C%20map%20evaluations%2C%20and%20six%20contemporary%20GeoAI%20models%2C%20each%20of%20which%20serves%20a%20variety%20of%20cartographic%20tasks.%20These%20models%20include%20decision%20trees%2C%20knowledge%20graph%20and%20semantic%20web%20technologies%2C%20deep%20convolutional%20neural%20networks%2C%20generative%20adversarial%20networks%2C%20graph%20neural%20networks%2C%20and%20reinforcement%20learning.%20Further%2C%20we%20summarize%20seven%20cartographic%20design%20applications%20where%20GeoAI%20have%20been%20effectively%20employed%3A%20generalization%2C%20symbolization%2C%20typography%2C%20map%20reading%2C%20map%20interpretation%2C%20map%20analysis%2C%20and%20map%20production.%20We%20also%20raise%20five%20potential%20ethical%20challenges%20that%20need%20to%20be%20addressed%20in%20the%20integration%20of%20GeoAI%20for%20cartography%3A%20commodification%2C%20responsibility%2C%20privacy%2C%20bias%2C%20and%20%28together%29%20transparency%2C%20explainability%2C%20and%20provenance.%20We%20conclude%20by%20identifying%20four%20potential%20
research%20directions%20for%20future%20cartographic%20research%20with%20GeoAI%3A%20GeoAI-enabled%20active%20cartographic%20symbolism%2C%20human-in-the-loop%20GeoAI%20for%20cartography%2C%20GeoAI-based%20mapping-as-a-service%2C%20and%20generative%20GeoAI%20for%20cartography.%22%2C%22date%22%3A%222024-01-16%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2295943%22%2C%22ISSN%22%3A%221523-0406%2C%201545-0465%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295943%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A30%3A55Z%22%7D%7D%2C%7B%22key%22%3A%227388BIV6%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Harrie%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHarrie%2C%20L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295948%26%23039%3B%26gt%3BMachine%20learning%20in%20cartography%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Machine%20learning%20in%20cartography%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lars%22%2C%22lastName%22%3A%22Harrie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rachid%22%2C%22lastName%22%3A%22Oucheikh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstNam
e%22%3A%22Tinghua%22%2C%22lastName%22%3A%22Ai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kai-Florian%22%2C%22lastName%22%3A%22Richter%22%7D%5D%2C%22abstractNote%22%3A%22Machine%20learning%20is%20increasingly%20used%20as%20a%20computing%20paradigm%20in%20cartographic%20research.%20In%20this%20extended%20editorial%2C%20we%20provide%20some%20background%20of%20the%20papers%20in%20the%20CaGIS%20special%20issue%20Machine%20Learning%20in%20Cartography%20with%20a%20special%20focus%20on%20pattern%20recognition%20in%20maps%2C%20cartographic%20generalization%2C%20style%20transfer%2C%20and%20map%20labeling.%20In%20addition%2C%20the%20paper%20includes%20a%20discussion%20about%20map%20encodings%20for%20machine%20learning%20applications%20and%20the%20possible%20need%20for%20explicit%20cartographic%20knowledge%20and%20procedural%20modeling%20in%20cartographic%20machine%20learning%20models.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2295948%22%2C%22ISSN%22%3A%221523-0406%2C%201545-0465%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295948%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A28%3A16Z%22%7D%7D%2C%7B%22key%22%3A%22N4UP28KW%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Robinson%20et%20al.%22%2C%22parsedDate%22%3A%222023-11-13%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BRobinson%2C%20A.C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bh
ttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3615886.3627734%26%23039%3B%26gt%3BCartography%20in%20GeoAI%3A%20Emerging%20Themes%20and%20Research%20Challenges%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Cartography%20in%20GeoAI%3A%20Emerging%20Themes%20and%20Research%20Challenges%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anthony%20C.%22%2C%22lastName%22%3A%22Robinson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arzu%22%2C%22lastName%22%3A%22%5Cu00c7%5Cu00f6ltekin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amy%20L.%22%2C%22lastName%22%3A%22Griffin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Florian%22%2C%22lastName%22%3A%22Ledermann%22%7D%5D%2C%22abstractNote%22%3A%22The%20emergence%20of%20prompt-driven%20artificial%20intelligence%20%28AI%29%20techniques%20for%20the%20rapid%20creation%20and%20iterative%20ideation%20of%20text%2C%20images%2C%20and%20code%20has%20disrupted%20the%20trajectory%20of%20science%2C%20technology%2C%20and%20society.%20Geospatial%20AI%20%28GeoAI%29%20aims%20to%20develop%20approaches%20for%20AI%20that%20target%20spatio-temporal%20problem%20contexts.%20Cartography%20is%20a%20key%20constituent%20area%20of%20GeoAI%2C%20providing%20the%20mechanism%20by%20which%20visual%20exploration%2C%20analysis%2C%20synthesis%2C%20and%20communication%20are%20made%20possible.%20In%20a%20recent%20research%20workshop%20with%2035%20academic%20cartographers%20from%20institutions%20in%20the%20U.S.%2C%20Europe%2C%20Australia%2C%20and%20Africa%2C%20we%20fielded%2017%20talks%20on%20emerging%20research%20areas%20in%20Cartography%20and%20AI%2C%20and%20in%20collaborative%20activities%20with%20participants%20we%20developed%20many%20new%20research%20questions.%20In%20this%20paper%20we%20highlight%20the%20key%20themes%20emerging%20from%20our%
20workshop%2C%20characterizing%20ongoing%20work%20as%20well%20as%20new%20challenges%20that%20lie%20at%20the%20intersections%20of%20Cartography%20and%20AI.%22%2C%22date%22%3A%222023-11-13%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%206th%20ACM%20SIGSPATIAL%20International%20Workshop%20on%20AI%20for%20Geographic%20Knowledge%20Discovery%22%2C%22conferenceName%22%3A%22SIGSPATIAL%20%2723%3A%20The%2031st%20ACM%20International%20Conference%20on%20Advances%20in%20Geographic%20Information%20Systems%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1145%5C%2F3615886.3627734%22%2C%22ISBN%22%3A%229798400703485%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3615886.3627734%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A32%3A58Z%22%7D%7D%2C%7B%22key%22%3A%22GMND4EA5%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BChen%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0012825223001277%26%23039%3B%26gt%3BArtificial%20intelligence%20and%20visual%20analytics%20in%20geographical%20space%20and%20cyberspace%3A%20Research%20opportunities%20and%20challenges%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Artificial%20intelligence%20and%20visual%20analytics%20in%20geographical%20space%20and%20cyberspace%3A%20Research%20opportunities%20and%20challenges%22%2C%22creators%22%3A%5B%7B%22cr
eatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christophe%22%2C%22lastName%22%3A%22Claramunt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Arzu%22%2C%22lastName%22%3A%22%5Cu00c7%5Cu00f6ltekin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xintao%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Peng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anthony%20C.%22%2C%22lastName%22%3A%22Robinson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dajiang%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Josef%22%2C%22lastName%22%3A%22Strobl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22John%20P.%22%2C%22lastName%22%3A%22Wilson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Batty%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mei-Po%22%2C%22lastName%22%3A%22Kwan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maryam%22%2C%22lastName%22%3A%22Lotfian%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fran%5Cu00e7ois%22%2C%22lastName%22%3A%22Golay%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22St%5Cu00e9phane%22%2C%22lastName%22%3A%22Joost%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jens%22%2C%22lastName%22%3A%22Ingensand%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ahmad%20M.%22%2C%22lastName%22%3A%22Senousi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tao%22%2C%22lastName%22%3A%22Cheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Temenoujka%22%2C%22lastName%22%3A%22Bandrova%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22
Milan%22%2C%22lastName%22%3A%22Konecny%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Paul%20M.%22%2C%22lastName%22%3A%22Torrens%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexander%22%2C%22lastName%22%3A%22Klippel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Songnian%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fengyuan%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Li%22%2C%22lastName%22%3A%22He%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jinfeng%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Carlo%22%2C%22lastName%22%3A%22Ratti%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Olaf%22%2C%22lastName%22%3A%22Kolditz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hui%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guonian%22%2C%22lastName%22%3A%22L%5Cu00fc%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20decades%2C%20we%20have%20witnessed%20great%20advances%20on%20the%20Internet%20of%20Things%2C%20mobile%20devices%2C%20sensor-based%20systems%2C%20and%20resulting%20big%20data%20infrastructures%2C%20which%20have%20gradually%2C%20yet%20fundamentally%20influenced%20the%20way%20people%20interact%20with%20and%20in%20the%20digital%20and%20physical%20world.%20Many%20human%20activities%20now%20not%20only%20operate%20in%20geographical%20%28physical%29%20space%20but%20also%20in%20cyberspace.%20Such%20changes%20have%20triggered%20a%20paradigm%20shift%20in%20geographic%20information%20science%20%28GIScience%29%2C%20as%20cyberspace%20brings%20new%20perspectives%20for%20the%20roles%20played%20by%20spatial%20and%20temporal%20dimensions%2C%20e.g.%2C%20the%20dilemma%20of%20placelessness%20and%20possible%20timelessness.%20As%20a%20discipline%20at%20the%20brink%20of%
20even%20bigger%20changes%20made%20possible%20by%20machine%20learning%20and%20artificial%20intelligence%2C%20this%20paper%20highlights%20the%20challenges%20and%20opportunities%20associated%20with%20geographical%20space%20in%20relation%20to%20cyberspace%2C%20with%20a%20particular%20focus%20on%20data%20analytics%20and%20visualization%2C%20including%20extended%20AI%20capabilities%20and%20virtual%20reality%20representations.%20Consequently%2C%20we%20encourage%20the%20creation%20of%20synergies%20between%20the%20processing%20and%20analysis%20of%20geographical%20and%20cyber%20data%20to%20improve%20sustainability%20and%20solve%20complex%20problems%20with%20geospatial%20applications%20and%20other%20digital%20advancements%20in%20urban%20and%20environmental%20sciences.%22%2C%22date%22%3A%2206%5C%2F2023%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.earscirev.2023.104438%22%2C%22ISSN%22%3A%2200128252%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0012825223001277%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A21%3A07Z%22%7D%7D%2C%7B%22key%22%3A%22D8Y2L67J%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20and%20Hsu%22%2C%22parsedDate%22%3A%222022-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F7%5C%2F385%26%23039%3B%26gt%3BGeoAI%20for%20Large-Scale%20Image%20Analysis%20and%20Machine%20Vision%3A%20Recent%20Progress%20of%20Artificial%20Intelligence%20in%20Geography%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3
Janowicz, K. et al. GeoFM: how will geo-foundation models reshape spatial data science and GeoAI? 2025
Affolter, C. et al. Generative AI in Map-Making: A Technical Exploration and Its Implications for Cartographers. 2025
Ye, X. et al. Human-centered GeoAI foundation models: where GeoAI meets human dynamics. 2025
Mai, G. et al. Towards the next generation of Geospatial Artificial Intelligence. 2025
Mai, G. et al. On the Opportunities and Challenges of Foundation Models for GeoAI (Vision Paper). 2024
Hu, Y. et al. A five-year milestone: reflections on advances and limitations in GeoAI research. 2024
Harrie, L. et al. Machine learning in cartography. 2024
Robinson, A.C. et al. Cartography in GeoAI: Emerging Themes and Research Challenges. 2023
Li, W. et al. GeoAI for Large-Scale Image Analysis and Machine Vision: Recent Progress of Artificial Intelligence in Geography. 2022
Ai, T. Some thoughts on deep learning empowering cartography. 2022
Usery, E.L. et al. GeoAI in the US Geological Survey for topographic mapping. 2022
Li, W. GeoAI: Where machine learning and big data converge in GIScience. 2020
Janowicz, K. et al. GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. 2020
Hu, Y. et al. GeoAI at ACM SIGSPATIAL: progress, challenges, and future directions. 2019
Map Distinction
Nguyen, P. et al. Detecting Omissions in Geographic Maps through Computer Vision. 2024
Wang, Y. et al. Recognition and Semantic Information Extraction for Map Based on Deep Learning. 2023
Li, J. Computational Cartographic Recognition: Exploring the Use of Machine Learning and Other Computational Approaches to Map Reading. 2022
Schnürer, R. et al. Detection of Pictorial Map Objects with Convolutional Neural Networks. 2021
Map Localisation
Oh, B.-W. Map Detection using Deep Learning. 2020
Similarity Search
Mahowald, J. et al. Retrieval-Augmented Search for Large-Scale Map Collections with ColPali. 2025. doi:10.48550/arXiv.2510.25718

Zhou, X. et al. Identifying the place without text annotations: an assembled neural network framework for content-based raster map retrieval with cartographical morphological pattern. 2025. doi:10.1080/10095020.2025.2522146

Guo, D. et al. SpatialScene2Vec: A self-supervised contrastive representation learning method for spatial scene similarity evaluation. 2024. doi:10.1016/j.jag.2024.103743

Petitpierre, R. et al. A fragment-based approach for computing the long-term visual evolution of historical maps. 2024. doi:10.1057/s41599-024-02840-w

Klasen, V. et al. How we see time – the evolution and current state of visualizations of temporal data. 2023. doi:10.1080/23729333.2022.2156316

Annanias, Y. et al. Development of a Semantic Segmentation Approach to Old-Map Comparison. 2023. doi:10.4230/LIPIcs.GIScience.2023.14

Guo, D. et al. DeepSSN: A deep convolutional neural network to assess spatial scene similarity. 2022. doi:10.1111/tgis.12915

Zhao, B. et al. Deep fake geography? When geospatial data encounter Artificial Intelligence. 2021. doi:10.1080/15230406.2021.1910075

Dobesova, Z. Experiment in Finding Look-Alike European Cities Using Urban Atlas Data. 2020. doi:10.3390/ijgi9060406
Mahowald, J. et al. Retrieval-Augmented Search for Large-Scale Map Collections with ColPali. 2025
Petitpierre, R. et al. A fragment-based approach for computing the long-term visual evolution of historical maps. 2024
Klasen, V. et al. How we see time – the evolution and current state of visualizations of temporal data. 2023
Annanias, Y. et al. Development of a Semantic Segmentation Approach to Old-Map Comparison. 2023
Guo, D. et al. DeepSSN: A deep convolutional neural network to assess spatial scene similarity. 2022
Zhao, B. et al. Deep fake geography? When geospatial data encounter Artificial Intelligence. 2021
Dobesova, Z. Experiment in Finding Look-Alike European Cities Using Urban Atlas Data. 2020
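Several of the entries above (for example the look-alike-cities experiment) retrieve similar maps by nearest-neighbour search over image-descriptor vectors produced by a pretrained network. As a minimal, self-contained sketch of that retrieval step only, the following uses toy NumPy vectors in place of real neural-network embeddings; the city names and descriptor values are purely illustrative:

```python
import numpy as np

def nearest_city(query, descriptors):
    """Return the city whose descriptor has the highest
    cosine similarity to the query descriptor."""
    names = list(descriptors)
    mat = np.stack([descriptors[n] for n in names])
    # Normalise rows and the query to unit length, then take dot products.
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = mat @ q
    return names[int(np.argmax(sims))]

# Toy descriptors standing in for neural-network image embeddings.
descriptors = {
    "Olomouc": np.array([0.9, 0.1, 0.2]),
    "Ghent":   np.array([0.1, 0.8, 0.3]),
    "Leeds":   np.array([0.2, 0.2, 0.9]),
}
print(nearest_city(np.array([0.85, 0.15, 0.25]), descriptors))  # → Olomouc
```

In the cited work the descriptors come from a pretrained image network and the search runs over hundreds of city maps, but the nearest-neighbour step itself reduces to this comparison.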
Feature Extraction (Symbols)
Pai, P.-L. et al. Application of deep learning for symbol detection on historical maps to explore spatiotemporal changes in the regional tea industry of early 20th-century Taiwan. 2025
Duan, W. et al. DIGMAPPER: A Modular System for Automated Geologic Map Digitization. 2025
Smith, E.S. et al. Estimating the density of urban trees in 1890s Leeds and Edinburgh using object detection on historical maps. 2025
Saxton, A. et al. Accurate Feature Extraction from Historical Geologic Maps Using Open-Set Segmentation and Detection. 2024
Berganzo-Besga, I. et al. Curriculum learning-based strategy for low-density archaeological mound detection from historical maps in India and Pakistan. 2023
Garcia-Molsosa, A. et al. Reconstructing long-term settlement histories on complex alluvial floodplains by integrating historical map analysis and remote-sensing: an archaeological analysis of the landscape of the Indus River Basin. 2023
Zhang, H. et al. Machine Recognition of Map Point Symbols Based on YOLOv3 and Automatic Configuration Associated with POI. 2022
Vassányi, G. et al. Automatic vectorization of point symbols on archive maps using deep convolutional neural network. 2021
Name%22%3A%22Gede%22%7D%5D%2C%22abstractNote%22%3A%22Archive%20topographical%20maps%20are%20a%20key%20source%20of%20geographical%20information%20from%20past%20ages%2C%20which%20can%20be%20valuable%20for%20several%20science%20fields.%20Since%20manual%20digitization%20is%20usually%20slow%20and%20takes%20much%20human%20resource%2C%20automatic%20methods%20are%20preferred%2C%20such%20as%20deep%20learning%20algorithms.%20Although%20automatic%20vectorization%20is%20a%20common%20problem%2C%20there%20have%20been%20few%20approaches%20regarding%20point%20symbols.%20In%20this%20paper%2C%20a%20point%20symbol%20vectorization%20method%20is%20proposed%2C%20which%20was%20tested%20on%20Third%20Military%20Survey%20map%20sheets%20using%20a%20Mask%20Regional%20Convolutional%20Neural%20Network%20%28MRCNN%29.%20The%20MRCNN%20implementation%20uses%20the%20ResNet101%20network%20improved%20with%20the%20Feature%20Pyramid%20Network%20architecture%20and%20is%20developed%20in%20a%20Google%20Colab%20environment.%20The%20pretrained%20network%20was%20trained%20on%20four%20point%20symbol%20categories%20simultaneously.%20Results%20show%2090%25%20accuracy%2C%20while%2094%25%20of%20symbols%20detected%20for%20some%20categories%20on%20the%20complete%20test%20sheet.%22%2C%22date%22%3A%222021%5C%2F12%5C%2F03%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fica-proc-4-109-2021%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F4%5C%2F109%5C%2F2021%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A33%3A54Z%22%7D%7D%2C%7B%22key%22%3A%22UZH2NK7Y%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Guo%20et%20al.%22%2C%22parsedDate%22%3A%222021-12-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20
class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGuo%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0098300421002302%26%23039%3B%26gt%3BDeep%20learning%20framework%20for%20geological%20symbol%20detection%20on%20geological%20maps%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Deep%20learning%20framework%20for%20geological%20symbol%20detection%20on%20geological%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22MingQiang%22%2C%22lastName%22%3A%22Guo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weijia%22%2C%22lastName%22%3A%22Bei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ying%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhanlong%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaozhen%22%2C%22lastName%22%3A%22Zhao%22%7D%5D%2C%22abstractNote%22%3A%22Dynamic%20legend%20generation%20for%20geological%20maps%20aims%20to%20detect%20and%20identify%20geological%20map%20symbols%20within%20the%20current%20viewshed%20and%20generate%20a%20corresponding%20real-time%20legend%20to%20help%20users%20quickly%20obtain%20the%20name%20and%20meaning%20of%20symbols.%20Detection%20and%20recognition%20entail%20high%20complexity%20and%20uncertainty%20because%20of%20the%20diversity%20of%20symbol%20types%20and%20the%20randomness%20of%20symbol%20distribution%2C%20and%20thus%20the%20generation%20of%20dynamic%20legends%20for%20geological%20maps%20is%20challenging.%20A%20new%20framework%20based%20on%20deep%20learning%20is%20proposed%20in%20this%20study%2C%20combining%20the%20deep%20convolutional%20neural%20network%20%28CNN%29%20and%2
0graph%20convolutional%20network%20%28GCN%29%20to%20realize%20the%20extraction%20and%20recognition%20of%20geological%20map%20symbols.%20Within%20the%20framework%2C%20a%20CNN-based%20model%20called%20single%20symbol%20detection%20network%20%28SSDN%29%20is%20developed%20to%20detect%20and%20identify%20single%20geological%20map%20symbols%2C%20and%20a%20novel%20GCN%20combined%20with%20L2%20distance%20attention%20%28DAGCN%29%20is%20proposed%20to%20deal%20with%20the%20difficulty%20of%20extracting%20compound%20symbols%20caused%20by%20the%20randomness%20of%20symbol%20distribution.%20This%20work%20systematically%20solves%20the%20problem%20of%20geological%20symbol%20detection%20with%20the%20aid%20of%20object%20detection%20technology%20based%20on%20deep%20learning%2C%20providing%20foundation%20for%20the%20dynamic%20legend%20generation.%20Experiments%20show%20that%20the%20framework%20of%20the%20proposed%20method%20is%20effective%2C%20and%20a%20new%20benchmark%20is%20established%20for%20geological%20symbol%20detection%20on%20geological%20maps.%20All%20of%20our%20data%20and%20code%20are%20publicly%20available.%22%2C%22date%22%3A%222021-12-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.cageo.2021.104943%22%2C%22ISSN%22%3A%220098-3004%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0098300421002302%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A58%3A22Z%22%7D%7D%2C%7B%22key%22%3A%22DI2KTT9L%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kong%20et%20al.%22%2C%22parsedDate%22%3A%222021-08-12%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKong%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039
%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.hindawi.com%5C%2Fjournals%5C%2Fcomplexity%5C%2F2021%5C%2F8235108%5C%2F%26%23039%3B%26gt%3BA%20Mountain%20Summit%20Recognition%20Method%20Based%20on%20Improved%20Faster%20R-CNN%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Mountain%20Summit%20Recognition%20Method%20Based%20on%20Improved%20Faster%20R-CNN%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yueping%22%2C%22lastName%22%3A%22Kong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yun%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Guo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiajing%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22Mountain%20summits%20are%20vital%20topographic%20feature%20points%2C%20which%20are%20essential%20for%20understanding%20landform%20processes%20and%20their%20impacts%20on%20the%20environment%20and%20ecosystem.%20Traditional%20summit%20detection%20methods%20operate%20on%20handcrafted%20features%20extracted%20from%20digital%20elevation%20model%20%28DEM%29%20data%20and%20apply%20parametric%20detection%20algorithms%20to%20locate%20mountain%20summits.%20However%2C%20these%20methods%20may%20no%20longer%20be%20effective%20to%20achieve%20desirable%20recognition%20results%20in%20small%20summits%20and%20suffer%20from%20the%20objective%20criterion%20lacking%20problem.%20Thus%2C%20to%20address%20these%20problems%2C%20we%20propose%20an%20improved%20Faster%20region-convolutional%20neural%20network%20%28R-CNN%29%20to%20accurately%20detect%20the%20mountain%20summits%20from%20DEM%20data.%20Based%20on%20Faster%20R-CNN%2C%20the%20improved%20network%20adopts%20a%20residual%20convolution%20block%20to%20replace%20the%20tradition
al%20part%20and%20adds%20a%20feature%20pyramid%20network%20%28FPN%29%20to%20fuse%20the%20features%20with%20adjacent%20layers%20to%20better%20address%20the%20mountain%20summit%20detection%20task.%20The%20residual%20convolution%20is%20employed%20to%20capture%20the%20deep%20correlation%20between%20visual%20and%20physical%20morphological%20features.%20The%20FPN%20is%20utilized%20to%20integrate%20the%20location%20and%20semantic%20information%20in%20the%20extracted%20feature%20maps%20to%20effectively%20represent%20the%20mountain%20summit%20area.%20The%20experimental%20results%20demonstrate%20that%20the%20proposed%20network%20could%20achieve%20the%20highest%20recall%20and%20precision%20without%20manually%20designed%20summit%20features%20and%20accurately%20identify%20small%20summits.%22%2C%22date%22%3A%222021%5C%2F8%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1155%5C%2F2021%5C%2F8235108%22%2C%22ISSN%22%3A%221076-2787%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.hindawi.com%5C%2Fjournals%5C%2Fcomplexity%5C%2F2021%5C%2F8235108%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A04%3A27Z%22%7D%7D%2C%7B%22key%22%3A%22949T2BWM%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Saeedimoghaddam%20and%20Stepinski%22%2C%22parsedDate%22%3A%222020-05-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSaeedimoghaddam%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1696968%26%23039%3B%26gt%3BAutomatic%20extraction%20of%20road%20intersection%20points%20from%20USGS%20historical%20map%20series%20using%20deep%20convolutional%20neural%
20networks%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20extraction%20of%20road%20intersection%20points%20from%20USGS%20historical%20map%20series%20using%20deep%20convolutional%20neural%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mahmoud%22%2C%22lastName%22%3A%22Saeedimoghaddam%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22T.%20F.%22%2C%22lastName%22%3A%22Stepinski%22%7D%5D%2C%22abstractNote%22%3A%22Road%20intersection%20data%20have%20been%20used%20across%20a%20range%20of%20geospatial%20analyses.%20However%2C%20many%20datasets%20dating%20from%20before%20the%20advent%20of%20GIS%20are%20only%20available%20as%20historical%20printed%20maps.%20To%20be%20analyzed%20by%20GIS%20software%2C%20they%20need%20to%20be%20scanned%20and%20transformed%20into%20a%20usable%20%28vector-based%29%20format.%20Because%20the%20number%20of%20scanned%20historical%20maps%20is%20voluminous%2C%20automated%20methods%20of%20digitization%20and%20transformation%20are%20needed.%20Frequently%2C%20these%20processes%20are%20based%20on%20computer%20vision%20algorithms.%20However%2C%20the%20key%20challenges%20to%20this%20are%20%281%29%20the%20low%20conversion%20accuracy%20for%20low%20quality%20and%20visually%20complex%20maps%2C%20and%20%282%29%20the%20selection%20of%20optimal%20parameters.%20In%20this%20paper%2C%20we%20used%20a%20region-based%20deep%20convolutional%20neural%20network-based%20framework%20%28RCNN%29%20for%20object%20detection%2C%20in%20order%20to%20automatically%20identify%20road%20intersections%20in%20historical%20maps%20of%20several%20cities%20in%20the%20United%20States%20of%20America.%20We%20found%20that%20the%20RCNN%20approach%20is%20more%20accurate%20than%20traditional%20computer%20vision%20algorithms%20for%20double-line%20cartographic%20representation%20of%20the%20roads%2C%20though%20its
%20accuracy%20does%20not%20surpass%20all%20traditional%20methods%20used%20for%20single-line%20symbols.%20The%20results%20suggest%20that%20the%20number%20of%20errors%20in%20the%20outputs%20is%20sensitive%20to%20complexity%20and%20blurriness%20of%20the%20maps%2C%20and%20to%20the%20number%20of%20distinct%20red-green-blue%20%28RGB%29%20combinations%20within%20them.%22%2C%22date%22%3A%222020-05-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2019.1696968%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1696968%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A06%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22AKXRJH5Q%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Torres%20et%20al.%22%2C%22parsedDate%22%3A%222018-09%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BTorres%2C%20R.N.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8527481%26%23039%3B%26gt%3BA%20Deep%20Learning%20Model%20for%20Identifying%20Mountain%20Summits%20in%20Digital%20Elevation%20Model%20Data%26lt%3B%5C%2Fa%26gt%3B.%202018%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22A%20Deep%20Learning%20Model%20for%20Identifying%20Mountain%20Summits%20in%20Digital%20Elevation%20Model%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rocio%20Nahime%22%2C%22lastName%22%3A%22Torres%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Piero%22%2C%22lastName%22%3A%22Fraternali
%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Federico%22%2C%22lastName%22%3A%22Milani%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Darian%22%2C%22lastName%22%3A%22Frajberg%22%7D%5D%2C%22abstractNote%22%3A%22Analyzing%20Digital%20Elevation%20Model%20%28DEM%29%20data%20to%20identify%20and%20classify%20landforms%20is%20an%20important%20task%2C%20which%20can%20contribute%20to%20improve%20the%20availability%20and%20quality%20of%20public%20open%20source%20cartography%20and%20to%20develop%20novel%20applications%20for%20tourism%20and%20environment%20monitoring.%20In%20the%20literature%2C%20several%20heuristic%20algorithms%20are%20documented%20for%20identifying%20the%20features%20of%20mountain%20regions%2C%20most%20notably%20the%20coordinate%20of%20summits.%20All%20these%20algorithms%20depend%20on%20parameters%2C%20which%20are%20manually%20set.%20In%20this%20paper%2C%20we%20explore%20the%20use%20of%20Deep%20Learning%20methods%20to%20train%20a%20model%20capable%20of%20identifying%20mountain%20summits%2C%20which%20learns%20from%20a%20gold%20standard%20dataset%20containing%20the%20coordinates%20of%20peaks%20in%20a%20region.%20The%20model%20has%20been%20trained%20and%20tested%20with%20Switzerland%20DEM%20and%20peak%20data.%22%2C%22date%22%3A%222018-09%22%2C%22proceedingsTitle%22%3A%222018%20IEEE%20First%20International%20Conference%20on%20Artificial%20Intelligence%20and%20Knowledge%20Engineering%20%28AIKE%29%22%2C%22conferenceName%22%3A%222018%20IEEE%20First%20International%20Conference%20on%20Artificial%20Intelligence%20and%20Knowledge%20Engineering%20%28AIKE%29%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FAIKE.2018.00049%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8527481%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A14%3A28Z%22%7D%7D%5D%7D
Duan, W. et al. DIGMAPPER: A Modular System for Automated Geologic Map Digitization. 2025. https://doi.org/10.48550/arXiv.2506.16006
Berganzo-Besga, I. et al. Curriculum learning-based strategy for low-density archaeological mound detection from historical maps in India and Pakistan. 2023.
Vassányi, G. et al. Automatic vectorization of point symbols on archive maps using deep convolutional neural network. 2021. https://doi.org/10.5194/ica-proc-4-109-2021
Guo, M. et al. Deep learning framework for geological symbol detection on geological maps. 2021. https://doi.org/10.1016/j.cageo.2021.104943
Kong, Y. et al. A Mountain Summit Recognition Method Based on Improved Faster R-CNN. 2021. https://doi.org/10.1155/2021/8235108
Saeedimoghaddam, M. et al. Automatic extraction of road intersection points from USGS historical map series using deep convolutional neural networks. 2020. https://doi.org/10.1080/13658816.2019.1696968
Torres, R.N. et al. A Deep Learning Model for Identifying Mountain Summits in Digital Elevation Model Data. 2018. https://doi.org/10.1109/AIKE.2018.00049
Feature Extraction (Lines)
Kramm, T. et al. Deep learning-based extraction of Kenya's historical road network from topographic maps. 2025. https://doi.org/10.1038/s41597-025-05442-6
Duan, W. et al. DIGMAPPER: A Modular System for Automated Geologic Map Digitization. 2025. https://doi.org/10.48550/arXiv.2506.16006
López-Rauhut, M. et al. Segmenting France Across Four Centuries. 2025. https://doi.org/10.48550/arXiv.2505.24824
Vynikal, J. et al. Automatic Elevation Contour Vectorization: A Case Study in a Deep Learning Approach. 2025. https://doi.org/10.3390/ijgi14050201
Kurochkin, V. et al. U-Net Models Enhanced by Generated Training Data for Automatic Isolines Extraction. 2025. https://doi.org/10.1007/978-3-031-94273-0_12
lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22U-Net%20Models%20Enhanced%20by%20Generated%20Training%20Data%20for%20Automatic%20Isolines%20Extraction%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vladislav%22%2C%22lastName%22%3A%22Kurochkin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yury%22%2C%22lastName%22%3A%22Karyakin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Irina%22%2C%22lastName%22%3A%22Donkova%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Arthur%22%2C%22lastName%22%3A%22Gibadullin%22%7D%5D%2C%22abstractNote%22%3A%22This%20study%20presents%20an%20approach%20to%20isolines%20extraction%20in%20topographic%20maps%20through%20the%20integration%20of%20deep%20learning%20techniques%20with%20automated%20training%20data%20generation.%20A%20geologic%20map%20image%20generator%20based%20on%20Perlin%20Noise%20was%20developed%20to%20augment%20existing%20datasets%2C%20addressing%20the%20challenge%20of%20limited%20annotated%20data%20in%20geospatial%20informatics.%20The%20generated%20synthetic%20data%20improved%20the%20performance%20of%20U-Net%20models%20in%20semantic%20segmentation%20tasks.%20Experimental%20evaluations%20revealed%20a%206%25%20increase%20in%20the%20Dice%20coefficient%20and%20a%2054%25%20rise%20in%20precision%20compared%20to%20baseline%20models.%20These%20results%20highlight%20the%20effectiveness%20of%20the%20proposed%20method%20in%20enhancing%20the%20generalizability%20of%20isolines%20extraction%20systems%2C%20making%20it%20a%20valuable%20tool%20for%20automating%20geospatial%20data%20processing%20and%20analysis.%22%2C%22date%22%3A%222025%22%2C%22proceedingsTitle%22%3A%22Digital%20and%20Information%20Technologies%20in%20Economics%20and%20Management%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978
-3-031-94273-0_12%22%2C%22ISBN%22%3A%22978-3-031-94273-0%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-94273-0_12%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T12%3A27%3A20Z%22%7D%7D%2C%7B%22key%22%3A%224IGGB8XV%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sertel%20et%20al.%22%2C%22parsedDate%22%3A%222024-12%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSertel%2C%20E.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F12%5C%2F464%26%23039%3B%26gt%3BAutomatic%20Road%20Extraction%20from%20Historical%20Maps%20Using%20Transformer-Based%20SegFormers%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20Road%20Extraction%20from%20Historical%20Maps%20Using%20Transformer-Based%20SegFormers%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Elif%22%2C%22lastName%22%3A%22Sertel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Can%20Michael%22%2C%22lastName%22%3A%22Hucko%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mustafa%20Erdem%22%2C%22lastName%22%3A%22Kabaday%5Cu0131%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20are%20valuable%20sources%20of%20geospatial%20data%20for%20various%20geography-related%20applications%2C%20providing%20insightful%20information%20about%20historical%20land%20use%2C%20transportation%20infrastructure%2C%20and%20settlements.%20While%20transformer-based%20segmentation%20methods%20have%20be
en%20widely%20applied%20to%20image%20segmentation%20tasks%2C%20they%20have%20mostly%20focused%20on%20satellite%20images.%20There%20is%20a%20growing%20need%20to%20explore%20transformer-based%20approaches%20for%20geospatial%20object%20extraction%20from%20historical%20maps%2C%20given%20their%20superior%20performance%20over%20traditional%20convolutional%20neural%20network%20%28CNN%29-based%20architectures.%20In%20this%20research%2C%20we%20aim%20to%20automatically%20extract%20five%20different%20road%20types%20from%20historical%20maps%2C%20using%20a%20road%20dataset%20digitized%20from%20the%20scanned%20Deutsche%20Heereskarte%201%3A200%2C000%20T%5Cu00fcrkei%20%28DHK%20200%20Turkey%29%20maps.%20We%20applied%20the%20variants%20of%20the%20transformer-based%20SegFormer%20model%20and%20evaluated%20the%20effects%20of%20different%20encoders%2C%20batch%20sizes%2C%20loss%20functions%2C%20optimizers%2C%20and%20augmentation%20techniques%20on%20road%20extraction%20performance.%20Our%20best%20results%2C%20with%20an%20intersection%20over%20union%20%28IoU%29%20of%200.5411%20and%20an%20F1%20score%20of%200.7017%2C%20were%20achieved%20using%20the%20SegFormer-B2%20model%2C%20the%20Adam%20optimizer%2C%20and%20the%20focal%20loss%20function.%20All%20SegFormer-based%20experiments%20outperformed%20previously%20reported%20CNN-based%20segmentation%20models%20on%20the%20same%20dataset.%20In%20general%2C%20increasing%20the%20batch%20size%20and%20using%20larger%20SegFormer%20variants%20%28from%20B0%20to%20B2%29%20resulted%20in%20improved%20accuracy%20metrics.%20Additionally%2C%20the%20choice%20of%20augmentation%20techniques%20significantly%20influenced%20the%20outcomes.%20Our%20results%20demonstrate%20that%20SegFormer%20models%20substantially%20enhance%20true%20positive%20predictions%20and%20resulted%20in%20higher%20precision%20metric%20values.%20These%20findings%20suggest%20that%20the%20output%20weights%20could%20be%20directly%20applied%20to%20transfer%20learning%20for%20similar%20historical%20maps%2
0and%20the%20inference%20of%20additional%20DHK%20maps%2C%20while%20offering%20a%20promising%20architecture%20for%20future%20road%20extraction%20studies.%22%2C%22date%22%3A%222024%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi13120464%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F12%5C%2F464%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-01-14T20%3A17%3A28Z%22%7D%7D%2C%7B%22key%22%3A%22KGSAP32J%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhao%20et%20al.%22%2C%22parsedDate%22%3A%222024-07-16%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhao%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-024-00187-z%26%23039%3B%26gt%3BAU3-GAN%3A%20A%20Method%20for%20Extracting%20Roads%20from%20Historical%20Maps%20Based%20on%20an%20Attention%20Generative%20Adversarial%20Network%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22AU3-GAN%3A%20A%20Method%20for%20Extracting%20Roads%20from%20Historical%20Maps%20Based%20on%20an%20Attention%20Generative%20Adversarial%20Network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guangxia%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jian%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%
3A%22Tingting%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ziwei%22%2C%22lastName%22%3A%22Li%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20years%2C%20the%20integration%20of%20deep%20learning%20technology%20based%20on%20convolutional%20neural%20networks%20with%20historical%20maps%20has%20made%20it%20possible%20to%20automatically%20extract%20roads%20from%20these%20maps%2C%20which%20is%20highly%20important%20for%20studying%20the%20evolution%20of%20transportation%20networks.%20However%2C%20the%20similarity%20between%20roads%20and%20other%20features%20%28such%20as%20contours%2C%20water%20systems%2C%20and%20administrative%20boundaries%29%20poses%20a%20significant%20challenge%20to%20the%20feature%20extraction%20capabilities%20of%20convolutional%20neural%20networks%20%28CNN%29.%20Additionally%2C%20CNN%20require%20a%20large%20quantity%20of%20labelled%20data%20for%20training%2C%20which%20can%20be%20a%20complex%20issue%20for%20historical%20maps.%20To%20address%20these%20limitations%2C%20we%20propose%20a%20method%20for%20extracting%20roads%20from%20historical%20maps%20based%20on%20an%20attention%20generative%20adversarial%20network.%20This%20approach%20leverages%20the%20unique%20architecture%20and%20training%20methodology%20of%20the%20generative%20adversarial%20network%20to%20augment%20datasets%20by%20generating%20data%20that%20closely%20resembles%20real%20samples.%20Meanwhile%2C%20we%20introduce%20an%20attention%20mechanism%20to%20enhance%20UNet3%5Cu2009%2B%5Cu2009and%20achieve%20accurate%20historical%20map%20road%20segmentation%20images.%20We%20validate%20our%20method%20using%20the%20Third%20Military%20Mapping%20Survey%20of%20Austria-Hungary%20and%20compare%20it%20with%20a%20typical%20U-shaped%20network.%20The%20experimental%20results%20show%20that%20our%20proposed%20method%20outperforms%20the%20direct%20use%20of%20the%20U-shaped%20network%2C%20achieving%20at%20least%20an%2018.26%25%20increase%20in%20F1%20and%20a%207.62%25%
20increase%20in%20the%20MIoU%2C%20demonstrating%20its%20strong%20ability%20to%20extract%20roads%20from%20historical%20maps%20and%20provide%20a%20valuable%20reference%20for%20road%20extraction%20from%20other%20types%20of%20historical%20maps.%22%2C%22date%22%3A%222024-07-16%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs41651-024-00187-z%22%2C%22ISSN%22%3A%222509-8829%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-024-00187-z%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-07-17T13%3A04%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22NE2XFP66%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222024-02-15%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BChen%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fjournals.plos.org%5C%2Fplosone%5C%2Farticle%3Fid%3D10.1371%5C%2Fjournal.pone.0298217%26%23039%3B%26gt%3BAutomatic%20vectorization%20of%20historical%20maps%3A%20A%20benchmark%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20vectorization%20of%20historical%20maps%3A%20A%20benchmark%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yizi%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Joseph%22%2C%22lastName%22%3A%22Chazalon%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Edwin%22%2C%22lastName%22%3A%22Carlinet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Minh%20%5Cu00
d4n%20V%5Cu0169%22%2C%22lastName%22%3A%22Ngoc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cl%5Cu00e9ment%22%2C%22lastName%22%3A%22Mallet%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Julien%22%2C%22lastName%22%3A%22Perret%22%7D%5D%2C%22abstractNote%22%3A%22Shape%20vectorization%20is%20a%20key%20stage%20of%20the%20digitization%20of%20large-scale%20historical%20maps%2C%20especially%20city%20maps%20that%20exhibit%20complex%20and%20valuable%20details.%20Having%20access%20to%20digitized%20buildings%2C%20building%20blocks%2C%20street%20networks%20and%20other%20geographic%20content%20opens%20numerous%20new%20approaches%20for%20historical%20studies%20such%20as%20change%20tracking%2C%20morphological%20analysis%20and%20density%20estimations.%20In%20the%20context%20of%20the%20digitization%20of%20Paris%20atlases%20created%20in%20the%2019th%20and%20early%2020th%20centuries%2C%20we%20have%20designed%20a%20supervised%20pipeline%20that%20reliably%20extract%20closed%20shapes%20from%20historical%20maps.%20This%20pipeline%20is%20based%20on%20a%20supervised%20edge%20filtering%20stage%20using%20deep%20filters%2C%20and%20a%20closed%20shape%20extraction%20stage%20using%20a%20watershed%20transform.%20It%20relies%20on%20probable%20multiple%20suboptimal%20methodological%20choices%20that%20hamper%20the%20vectorization%20performances%20in%20terms%20of%20accuracy%20and%20completeness.%20Objectively%20investigating%20which%20solutions%20are%20the%20most%20adequate%20among%20the%20numerous%20possibilities%20is%20comprehensively%20addressed%20in%20this%20paper.%20The%20following%20contributions%20are%20subsequently%20introduced%3A%20%28i%29%20we%20propose%20an%20improved%20training%20protocol%20for%20map%20digitization%3B%20%28ii%29%20we%20introduce%20a%20joint%20optimization%20of%20the%20edge%20detection%20and%20shape%20extraction%20stages%3B%20%28iii%29%20we%20compare%20the%20performance%20of%20state-of-the-art%20deep%20edge%20filters%20with%20topology
-preserving%20loss%20functions%2C%20including%20vision%20transformers%3B%20%28iv%29%20we%20evaluate%20the%20end-to-end%20deep%20learnable%20watershed%20against%20Meyer%20watershed.%20We%20subsequently%20design%20the%20critical%20path%20for%20a%20fully%20automatic%20extraction%20of%20key%20elements%20of%20historical%20maps.%20All%20the%20data%2C%20code%2C%20benchmark%20results%20are%20freely%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fsoduco%5C%2FBenchmark_historical_map_vectorization.%22%2C%22date%22%3A%2215.02.2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1371%5C%2Fjournal.pone.0298217%22%2C%22ISSN%22%3A%221932-6203%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fjournals.plos.org%5C%2Fplosone%5C%2Farticle%3Fid%3D10.1371%5C%2Fjournal.pone.0298217%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-12T22%3A33%3A39Z%22%7D%7D%2C%7B%22key%22%3A%22THS4BEJA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Jiao%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BJiao%2C%20C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0198971523001230%26%23039%3B%26gt%3BA%20novel%20framework%20for%20road%20vectorization%20and%20classification%20from%20historical%20maps%20based%20on%20deep%20learning%20and%20symbol%20painting%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20novel%20framework%20for%20road%20vectorization%20and%20classification%20from%20historical%20maps%20based%20on%2
0deep%20learning%20and%20symbol%20painting%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chenjing%22%2C%22lastName%22%3A%22Jiao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Road%20networks%20in%20the%20past%20are%20imperative%20for%20understanding%20evolution%20of%20transportation%20infrastructure%2C%20urban%20sprawl%2C%20and%20route%20planning%2C%20etc.%20Various%20approaches%20have%20been%20developed%20for%20road%20extraction%20from%20historical%20maps%2C%20among%20which%20deep%20learning%20techniques%20stand%20out%20as%20the%20most%20effective%20ones.%20However%2C%20little%20attention%20has%20been%20paid%20to%20investigating%20road%20vectorization%20and%20classification%20from%20historical%20maps.%20Moreover%2C%20road%20classification%20via%20machine%20learning%20methods%20usually%20requires%20large%20amounts%20of%20dedicated%20training%20data.%20To%20address%20these%20issues%2C%20this%20paper%20proposes%20a%20novel%20and%20comprehensive%20framework%20for%20road%20vectorization%20and%20classification%20on%20the%20basis%20of%20road%20segmentation%20from%20historical%20maps.%20First%2C%20deep%20learning%20is%20used%20to%20get%20pixel-wise%20raster%20road%20segmentation%20results%2C%20which%20are%20further%20skeletonized%20using%20morphological%20operations.%20Then%2C%20considering%20that%20each%20road%20class%20is%20represented%20with%20a%20certain%20symbol%2C%20a%20painting%20function%20is%20defined%20for%20each%20class%20able%20to%20paint%20the%20corresponding%20symbol.%20These%20painting%20functions%20are%20then%20used%20to%20draw%20road%20segments%20along%20the%20skeletons.%20Since%20the%20start%20and%20end%20points%20in%20each%20painting%20function%20are%20used%20to%20vectorise%20the%20segment%2C%20this%20method%20achieves%
20vectorization%20and%20classification%20at%20the%20same%20time.%20Our%20method%20is%20validated%20on%20four%20Siegfried%20map%20sheets%20in%20Switzerland%2C%20and%20evaluated%20via%20both%20visual%20and%20quantitative%20assessments.%20The%20results%20indicate%20that%20the%20method%20is%20capable%20of%20classifying%20roads%20accurately.%20In%20particular%2C%20two%20evaluation%20metrics%20completeness%20and%20correctness%20achieve%2090.69%25%20and%2072.71%25%20respectively%20for%20road%20class%202%20which%20accounts%20for%20the%20highest%20portion%20in%20the%20map.%20Moreover%2C%20the%20results%20of%20this%20method%20avoid%20the%20saw-toothed%20issue%20of%20vectorised%20road%20lines.%20This%20research%20is%20beneficial%20for%20creating%20complete%20vector%20road%20network%20datasets%20with%20class%20information%20to%20support%20decision-making%20in%20urban%20planning%20and%20transportation.%22%2C%22date%22%3A%2203%5C%2F2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.compenvurbsys.2023.102060%22%2C%22ISSN%22%3A%2201989715%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0198971523001230%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A30%3A34Z%22%7D%7D%2C%7B%22key%22%3A%22ZZUAX9AQ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xia%20et%20al.%22%2C%22parsedDate%22%3A%222023-11-20%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXia%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3615886.3627738%26%23039%3B%26gt%3BContrastive%20Pretraining%20for%20Railway%20Detection
%3A%20Unveiling%20Historical%20Maps%20with%20Transformers%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Contrastive%20Pretraining%20for%20Railway%20Detection%3A%20Unveiling%20Historical%20Maps%20with%20Transformers%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xue%22%2C%22lastName%22%3A%22Xia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chenjing%22%2C%22lastName%22%3A%22Jiao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Detecting%20railways%20from%20historical%20maps%20is%20challenging%20due%20to%20their%20infrequent%20representation%20in%20a%20map%20sheet%20and%20their%20visual%20similarity%20with%20roads.%20Basically%2C%20both%20railways%20and%20roads%20are%20symbolised%20as%20two%20parallel%20black%20lines%2C%20with%20slight%20differences%20only%20in%20line%20thickness.%20Recent%20advancements%20in%20transformer%20models%20for%20computer%20vision%20tasks%20have%20sparked%20interest%20in%20utilizing%20them%20for%20processing%20historical%20maps.%20However%2C%20the%20success%20of%20transformers%20heavily%20relies%20on%20large-scale%20labelled%20datasets%2C%20predominantly%20available%20for%20ground%20imagery%20rather%20than%20historical%20maps.%20To%20overcome%20these%20challenges%2C%20we%20exploit%20the%20unique%20spatial%20characteristics%20of%20historical%20map%20data%2C%20where%20the%20same%20location%20can%20be%20depicted%20over%20different%20time%20spans%20across%20different%20map%20series.%20For%20example%2C%20each%20location%20in%20Switzerland%20is%20depicted%20in%20both%20the%20Siegfried%20map%20and%20the%20Old%20National%20map%2C%20each%20exhibiting%20distinct%20symbols%20and%20drawing%20styles.%20In%20this%20work%2C%20we%20address%20the%20scarcity%20of%20labelled%20data%20by%20generating
%20positive%20pairs%20of%20the%20same%20scene%20from%20different%20map%20series%20and%20employ%20self-supervised%20contrastive%20learning%20to%20pre-train%20a%20dedicated%20transformer%20encoder%20optimized%20for%20map%20data.%20Subsequently%2C%20we%20finetune%20the%20entire%20transformer%20network%20for%20the%20downstream%20railway%20detection%20task.%20Experimental%20results%20demonstrate%20that%20our%20method%20achieves%20comparable%20performance%20to%20fully%20supervised%20approaches%2C%20while%20significantly%20reducing%20the%20amount%20of%20required%20labelled%20dataset%20to%20a%20mere%202.5%25%20after%20contrastive%20pretraining.%22%2C%22date%22%3A%22November%2020%2C%202023%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%206th%20ACM%20SIGSPATIAL%20International%20Workshop%20on%20AI%20for%20Geographic%20Knowledge%20Discovery%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3615886.3627738%22%2C%22ISBN%22%3A%229798400703485%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3615886.3627738%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-04-26T09%3A53%3A41Z%22%7D%7D%2C%7B%22key%22%3A%22ZJRQTZR9%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22M%5Cu00e4yr%5Cu00e4%20et%20al.%22%2C%22parsedDate%22%3A%222023-11-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BM%5Cu00e4yr%5Cu00e4%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs13280-023-01838-z%26%23039%3B%26gt%3BUtilizing%20historical%20maps%20in%20identification%20of%20long-term%20land%20use%20and%20land%20cover%20chan
ges%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Utilizing%20historical%20maps%20in%20identification%20of%20long-term%20land%20use%20and%20land%20cover%20changes%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Janne%22%2C%22lastName%22%3A%22M%5Cu00e4yr%5Cu00e4%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sonja%22%2C%22lastName%22%3A%22Kivinen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sarita%22%2C%22lastName%22%3A%22Keski-Saari%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Laura%22%2C%22lastName%22%3A%22Poikolainen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Timo%22%2C%22lastName%22%3A%22Kumpula%22%7D%5D%2C%22abstractNote%22%3A%22Knowledge%20in%20the%20magnitude%20and%20historical%20trends%20in%20land%20use%20and%20land%20cover%20%28LULC%29%20is%20needed%20to%20understand%20the%20changing%20status%20of%20the%20key%20elements%20of%20the%20landscape%20and%20to%20better%20target%20management%20efforts.%20However%2C%20this%20information%20is%20not%20easily%20available%20before%20the%20start%20of%20satellite%20campaign%20missions.%20Scanned%20historical%20maps%20are%20a%20valuable%20but%20underused%20source%20of%20LULC%20information.%20As%20a%20case%20study%2C%20we%20used%20U-Net%20to%20automatically%20extract%20fields%2C%20mires%2C%20roads%2C%20watercourses%2C%20and%20water%20bodies%20from%20scanned%20historical%20maps%2C%20dated%201965%2C%201984%20and%201985%20for%20our%20900%5Cu00a0km%5Cu00b2%20study%20area%20in%20Southern%20Finland.%20We%20then%20used%20these%20data%2C%20along%20with%20the%20topographic%20databases%20from%202005%20and%202022%2C%20to%20quantify%20the%20LULC%20changes%20for%20the%20past%2057%20years.%20For%20example%2C%20the%20total%20area%20of%20fields%20decreased%20by%20around%2027%5Cu00a0km%5Cu00b2%2C%20and%20
the%20total%20length%20of%20watercourses%20increased%20by%20around%202250%5Cu00a0km%20in%20our%20study%20area.%22%2C%22date%22%3A%222023-11-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs13280-023-01838-z%22%2C%22ISSN%22%3A%221654-7209%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs13280-023-01838-z%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A20%3A42Z%22%7D%7D%2C%7B%22key%22%3A%223FM693SU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wang%20et%20al.%22%2C%22parsedDate%22%3A%222023-08-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fica-proc.copernicus.org%5C%2Farticles%5C%2F5%5C%2F25%5C%2F2023%5C%2F%26%23039%3B%26gt%3BRecognition%20and%20Semantic%20Information%20Extraction%20for%20Map%20Based%20on%20Deep%20Learning%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Recognition%20and%20Semantic%20Information%20Extraction%20for%20Map%20Based%20on%20Deep%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yong%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kaixuan%22%2C%22lastName%22%3A%22Du%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xianghong%22%2C%22lastName%22%3A%22Che%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ruiyuan%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fu%2
…, Ren (2023). Proceedings of the ICA, 5, 25. https://doi.org/10.5194/ica-proc-5-25-2023
Analyses the composition of maps and systematically illustrates deep-learning methods for map image recognition, geographic-feature detection, and semantic segmentation, toward intelligent map recognition and mapping.

Wu, S., Schindler, K., Heitzler, M., & Hurni, L. (2023). Domain adaptation in segmenting historical maps: A weakly supervised approach through spatial co-occurrence. ISPRS Journal of Photogrammetry and Remote Sensing. https://doi.org/10.1016/j.isprsjprs.2023.01.021
Uses the spatial co-occurrence of geographic objects across map series as a weak supervision signal for domain adaptation; a self-supervised co-occurrence network with a loss tolerant of object change and misalignment significantly outperforms unsupervised baselines when segmenting hydrological objects such as rivers, lakes, and wetlands.

Wu, S., Chen, Y., Schindler, K., & Hurni, L. (2023). Cross-attention Spatio-temporal Context Transformer for Semantic Segmentation of Historical Maps. In Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems. https://doi.org/10.1145/3589132.3625572
Proposes U-SpaTem, a U-Net-based network that fuses spatial and temporal context with cross-attention transformers to counter aleatoric uncertainty from drawing, scanning, and fading defects and from small training tiles.

Wu, S., Heitzler, M., & Hurni, L. (2022). Leveraging uncertainty estimation and spatial pyramid pooling for extracting hydrological features from scanned historical topographic maps. GIScience & Remote Sensing. https://doi.org/10.1080/15481603.2021.2023840
Combines Bayesian deep learning with atrous spatial pyramid pooling (ASPP) to model map noise; achieves an average Dice coefficient of 0.827 (a 26% improvement over a plain U-Net) and produces interpretable pixel-wise uncertainty maps.

Wong, C.-S., Liao, H.-M., Tsai, R. T.-H., & Chang, M.-C. (2022). Semi-supervised learning for topographic map analysis over time: A study of bridge segmentation. Scientific Reports. https://doi.org/10.1038/s41598-022-23364-w
A two-stage framework that first style-transfers topographic maps across years and then trains supervised models (U-Net, FCN, DeepLabV3, MobileNetV3) on the synthesized maps, enabling bridge detection on historical maps without manual annotations.

Jiao, C., Heitzler, M., & Hurni, L. (2022). A fast and effective deep learning approach for road extraction from historical maps by automatically generating training data with symbol reconstruction. International Journal of Applied Earth Observation and Geoinformation. https://doi.org/10.1016/j.jag.2022.102980
Automatically generates training data through symbol reconstruction; experiments on the Swiss Siegfried map show that models trained on imitation maps alone already extract roads satisfactorily, and mixing imitation with real data can outperform real data alone.

Ran, W., Wang, J., Yang, K., Bai, L., Rao, X., Zhao, Z., & Xu, C. (2022). Raster Map Line Element Extraction Method Based on Improved U-Net Network. ISPRS International Journal of Geo-Information, 11(8), 439. https://doi.org/10.3390/ijgi11080439
Augments U-Net with attention gates and ASPP to suppress text and background interference in line-element extraction, reporting 93.08% accuracy and a 92.68% F1-score.

Jiao, C., Heitzler, M., & Hurni, L. (2022). A Novel Data Augmentation Method to Enhance the Training Dataset for Road Extraction from Swiss Historical Maps. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-2-2022, 423. https://doi.org/10.5194/isprs-annals-V-2-2022-423-2022
Varies target features instead of whole training samples, avoiding augmentations that never occur in practice (e.g., rotated numbers and labels); validated on road extraction from the Siegfried map.

Avcı, C., Sertel, E., & Kabadayı, M. E. (2022). Deep Learning-Based Road Extraction From Historical Maps. IEEE Geoscience and Remote Sensing Letters. https://doi.org/10.1109/LGRS.2022.3204817
Benchmarks UNet++ and DeepLabV3 variants on 7,076 patches from the scanned Deutsche Heereskarte 1:200 000 Türkei maps; best results come from UNet++ with a split-attention (ResNeSt) encoder, at 98.99% overall accuracy and a 57.72% F1-score.

Mao, X., Chow, J. K., Su, Z., Wang, Y.-H., Li, J., Wu, T., & Li, T. (2021). Deep learning-enhanced extraction of drainage networks from digital elevation models. Environmental Modelling & Software. https://doi.org/10.1016/j.envsoft.2021.105135
Introduces distributed representations of aspect features and a U-Net to predict flow directions and classify pixels, with postprocessing to delineate flowlines; extracts drainage networks accurately across terrain types without user-supplied parameters.
A%222024-03-16T20%3A12%3A41Z%22%7D%7D%2C%7B%22key%22%3A%22EZZFBNKQ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ekim%20et%20al.%22%2C%22parsedDate%22%3A%222021-08%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BEkim%2C%20B.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F8%5C%2F492%26%23039%3B%26gt%3BAutomatic%20Road%20Extraction%20from%20Historical%20Maps%20Using%20Deep%20Learning%20Techniques%3A%20A%20Regional%20Case%20Study%20of%20Turkey%20in%20a%20German%20World%20War%20II%20Map%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20Road%20Extraction%20from%20Historical%20Maps%20Using%20Deep%20Learning%20Techniques%3A%20A%20Regional%20Case%20Study%20of%20Turkey%20in%20a%20German%20World%20War%20II%20Map%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Burak%22%2C%22lastName%22%3A%22Ekim%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Elif%22%2C%22lastName%22%3A%22Sertel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%20Erdem%22%2C%22lastName%22%3A%22Kabaday%5Cu0131%22%7D%5D%2C%22abstractNote%22%3A%22Scanned%20historical%20maps%20are%20available%20from%20different%20sources%20in%20various%20scales%20and%20contents.%20Automatic%20geographical%20feature%20extraction%20from%20these%20historical%20maps%20is%20an%20essential%20task%20to%20derive%20valuable%20spatial%20information%20on%20the%20characteristics%20and%20distribution%20of%20transportation%20infrastru
ctures%20and%20settlements%20and%20to%20conduct%20quantitative%20and%20geometrical%20analysis.%20In%20this%20research%2C%20we%20used%20the%20Deutsche%20Heereskarte%201%3A200%2C000%20T%5Cu00fcrkei%20%28DHK%20200%20Turkey%29%20maps%20as%20the%20base%20geoinformation%20source%20to%20construct%20the%20past%20transportation%20networks%20using%20the%20deep%20learning%20approach.%20Five%20different%20road%20types%20were%20digitized%20and%20labeled%20to%20be%20used%20as%20inputs%20for%20the%20proposed%20deep%20learning-based%20segmentation%20approach.%20We%20adapted%20U-Net%2B%2B%20and%20ResneXt50_32%5Cu00d74d%20architectures%20to%20produce%20multi-class%20segmentation%20masks%20and%20perform%20feature%20extraction%20to%20determine%20various%20road%20types%20accurately.%20We%20achieved%20remarkable%20results%2C%20with%2098.73%25%20overall%20accuracy%2C%2041.99%25%20intersection%20of%20union%2C%20and%2046.61%25%20F1%20score%20values.%20The%20proposed%20method%20can%20be%20implemented%20in%20DHK%20maps%20of%20different%20countries%20to%20automatically%20extract%20different%20road%20types%20and%20used%20for%20transfer%20learning%20of%20different%20historical%20maps.%22%2C%22date%22%3A%222021%5C%2F8%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi10080492%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F8%5C%2F492%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A50%3A40Z%22%7D%7D%2C%7B%22key%22%3A%2283R7FCSV%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Satari%20et%20al.%22%2C%22parsedDate%22%3A%222021-06-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSatari%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D
%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F2%5C%2F11%5C%2F2021%5C%2F%26%23039%3B%26gt%3BExtraction%20of%20linear%20structures%20from%20digital%20terrain%20models%20using%20deep%20learning%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Extraction%20of%20linear%20structures%20from%20digital%20terrain%20models%20using%20deep%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ramish%22%2C%22lastName%22%3A%22Satari%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bashir%22%2C%22lastName%22%3A%22Kazimi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Monika%22%2C%22lastName%22%3A%22Sester%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20explores%20the%20role%20deep%20convolutional%20neural%20networks%20play%20in%20automated%20extraction%20of%20linear%20structures%20using%20semantic%20segmentation%20techniques%20in%20Digital%20Terrain%20Models%20%28DTMs%29.%20DTM%20is%20a%20regularly%20gridded%20raster%20created%20from%20laser%20scanning%20point%20clouds%20and%20represents%20elevations%20of%20the%20bare%20earth%20surface%20with%20respect%20to%20a%20reference.%20Recent%20advances%20in%20Deep%20Learning%20%28DL%29%20have%20made%20it%20possible%20to%20explore%20the%20use%20of%20semantic%20segmentation%20for%20detection%20of%20terrain%20structures%20in%20DTMs.%20This%20research%20examines%20two%20novel%20and%20practical%20deep%20convolutional%20neural%20network%20architectures%20i.e.%20an%20encoder-decoder%20network%20named%20as%20SegNet%20and%20the%20recent%20state-of-the-art%20high-resolution%20network%20%28HRNet%29.%20This%20paper%20initially%20focuses%20on%20the%20pixel-wise%20binary%20classification%20in%20order%20to%20validate%20the%20applicability%20of%20the%
20proposed%20approaches.%20The%20networks%20are%20trained%20to%20distinguish%20between%20points%20belonging%20to%20linear%20structures%20and%20those%20belonging%20to%20background.%20In%20the%20second%20step%2C%20multi-class%20segmentation%20is%20carried%20out%20on%20the%20same%20DTM%20dataset.%20The%20model%20is%20trained%20to%20not%20only%20detect%20a%20linear%20feature%2C%20but%20also%20to%20categorize%20it%20as%20one%20of%20the%20classes%3A%20hollow%20ways%2C%20roads%2C%20forest%20paths%2C%20historical%20paths%2C%20and%20streams.%20Results%20of%20the%20experiment%20in%20addition%20to%20the%20quantitative%20and%20qualitative%20analysis%20show%20the%20applicability%20of%20deep%20neural%20networks%20for%20detection%20of%20terrain%20structures%20in%20DTMs.%20From%20the%20deep%20learning%20models%20utilized%2C%20HRNet%20gives%20better%20results.%22%2C%22date%22%3A%222021%5C%2F06%5C%2F04%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fagile-giss-2-11-2021%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F2%5C%2F11%5C%2F2021%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A33%3A17Z%22%7D%7D%2C%7B%22key%22%3A%22I8GBRRPK%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lenc%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLenc%2C%20L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-030-86198-8_4%26%23039%3B%26gt%3BBorder%20Detection%20for%20Seamless%20Connection%20of%20Historical%20Cadastral%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202
021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Border%20Detection%20for%20Seamless%20Connection%20of%20Historical%20Cadastral%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ladislav%22%2C%22lastName%22%3A%22Lenc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Prantl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ji%5Cu0159%5Cu00ed%22%2C%22lastName%22%3A%22Mart%5Cu00ednek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pavel%22%2C%22lastName%22%3A%22Kr%5Cu00e1l%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Elisa%20H.%22%2C%22lastName%22%3A%22Barney%20Smith%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Umapada%22%2C%22lastName%22%3A%22Pal%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20presents%20a%20set%20of%20methods%20for%20detection%20of%20important%20features%20in%20historical%20cadastral%20maps.%20The%20goal%20is%20to%20allow%20a%20seamless%20connection%20of%20the%20maps%20based%20on%20such%20features.%20The%20connection%20is%20very%20important%20so%20that%20the%20maps%20can%20be%20presented%20online%20and%20utilized%20easily.%20To%20the%20best%20of%20our%20knowledge%2C%20this%20is%20the%20first%20attempt%20to%20solve%20this%20task%20fully%20automatically.%20Compared%20to%20the%20manual%20annotation%20which%20is%20very%20time-consuming%20we%20can%20significantly%20reduce%20the%20costs%20and%20provide%20comparable%20or%20even%20better%20results.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22Document%20Analysis%20and%20Recognition%20%5Cu2013%20ICDAR%202021%20Workshops%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-030-86198-8_4%22%2C%22ISBN%22%3A%22978-3-030-86198-8%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-
030-86198-8_4%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T13%3A38%3A37Z%22%7D%7D%2C%7B%22key%22%3A%22EVF393MF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Petitpierre%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BPetitpierre%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%26%23039%3B%26gt%3BGeneric%20Semantic%20Segmentation%20of%20Historical%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Generic%20Semantic%20Segmentation%20of%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R%5Cu00e9mi%22%2C%22lastName%22%3A%22Petitpierre%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fr%5Cu00e9d%5Cu00e9ric%22%2C%22lastName%22%3A%22Kaplan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Isabella%22%2C%22lastName%22%3A%22di%20Lenardo%22%7D%5D%2C%22abstractNote%22%3A%22Research%20in%20automatic%20map%20processing%20is%20largely%20focused%20on%20homogeneous%20corpora%20or%20even%20individual%20maps%2C%20leading%20to%20inflexible%20models.%20Based%20on%20two%20new%20corpora%2C%20the%20first%20one%20centered%20on%20maps%20of%20Paris%20and%20the%20second%20one%20gathering%20maps%20of%20cities%20from%20all%20over%20the%20world%2C%20we%20present%20a%20method%20for%20computing%20the%20figurative%20diversity%20of%20cartographic%20collections.%20In%20a%20second%20step%2C%20we%20discuss%20the%20actua
l%20opportunities%20for%20CNN-based%20semantic%20segmentation%20of%20historical%20city%20maps.%20Through%20several%20experiments%2C%20we%20analyze%20the%20impact%20of%20figurative%20and%20cultural%20diversity%20on%20the%20segmentation%20performance.%20Finally%2C%20we%20highlight%20the%20potential%20for%20large-scale%20and%20generic%20algorithms.%20Training%20data%20and%20code%20of%20the%20described%20algorithms%20are%20made%20open-source%20and%20published%20with%20this%20article.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22CEUR%20Workshop%20Proceedings%22%2C%22conferenceName%22%3A%22CHR%202021%3A%20Computational%20Humanities%20Research%20Conference%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A34%3A57Z%22%7D%7D%2C%7B%22key%22%3A%22G4KM2XB8%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Can%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCan%2C%20Y.S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9410537%26%23039%3B%26gt%3BAutomatic%20Detection%20of%20Road%20Types%20From%20the%20Third%20Military%20Mapping%20Survey%20of%20Austria-Hungary%20Historical%20Map%20Series%20With%20Deep%20Convolutional%20Neural%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20Detection%20of%20Road%20Types%20From%20the%20T
hird%20Military%20Mapping%20Survey%20of%20Austria-Hungary%20Historical%20Map%20Series%20With%20Deep%20Convolutional%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yekta%20Said%22%2C%22lastName%22%3A%22Can%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Petrus%20Johannes%22%2C%22lastName%22%3A%22Gerrits%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%20Erdem%22%2C%22lastName%22%3A%22Kabadayi%22%7D%5D%2C%22abstractNote%22%3A%22With%20the%20increased%20amount%20of%20digitized%20historical%20documents%2C%20information%20extraction%20from%20them%20gains%20pace.%20Historical%20maps%20contain%20valuable%20information%20about%20historical%2C%20geographical%20and%20economic%20aspects%20of%20an%20era.%20Retrieving%20information%20from%20historical%20maps%20is%20more%20challenging%20than%20processing%20modern%20maps%20due%20to%20lower%20image%20quality%2C%20degradation%20of%20documents%20and%20the%20massive%20amount%20of%20non-annotated%20digital%20map%20archives.%20Convolutional%20Neural%20Networks%20%28CNN%29%20solved%20many%20image%20processing%20challenges%20with%20great%20success%2C%20but%20they%20require%20a%20vast%20amount%20of%20annotated%20data.%20For%20historical%20maps%2C%20this%20means%20an%20unprecedented%20scale%20of%20manual%20data%20entry%20and%20annotation.%20In%20this%20study%2C%20we%20first%20manually%20annotated%20the%20Third%20Military%20Mapping%20Survey%20of%20Austria-Hungary%20historical%20map%20series%20conducted%20between%201884%20and%201918%20and%20made%20them%20publicly%20accessible.%20We%20recognized%20different%20road%20types%20and%20their%20pixel-wise%20positions%20automatically%20by%20using%20a%20CNN%20architecture%20and%20achieved%20promising%20results.%22%2C%22date%22%3A%222021%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FACCESS.2021.3074897%22%2C%22ISSN%22%3A%222169-3536%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%
2Fdocument%5C%2F9410537%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A22%3A37Z%22%7D%7D%2C%7B%22key%22%3A%22DN5EFLS9%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yang%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYang%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-67540-0_12%26%23039%3B%26gt%3BT2I-CycleGAN%3A%20A%20CycleGAN%20for%20Maritime%20Road%20Network%20Extraction%20from%20Crowdsourcing%20Spatio-Temporal%20AIS%20Trajectory%20Data%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22T2I-CycleGAN%3A%20A%20CycleGAN%20for%20Maritime%20Road%20Network%20Extraction%20from%20Crowdsourcing%20Spatio-Temporal%20AIS%20Trajectory%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuankai%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guiling%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiahao%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jing%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Honghao%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Xinheng%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Muddesar%22%2C%22lastN
ame%22%3A%22Iqbal%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Yuyu%22%2C%22lastName%22%3A%22Yin%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Jianwei%22%2C%22lastName%22%3A%22Yin%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Ning%22%2C%22lastName%22%3A%22Gu%22%7D%5D%2C%22abstractNote%22%3A%22Maritime%20road%20network%20is%20composed%20of%20detailed%20maritime%20routes%20and%20is%20vital%20in%20many%20applications%20such%20as%20threats%20detection%2C%20traffic%20control.%20However%2C%20the%20vessel%20trajectory%20data%2C%20or%20Automatic%20Identification%20System%20%28AIS%29%20data%2C%20are%20usually%20large%20in%20scale%20and%20collected%20with%20different%20sampling%20rates.%20And%2C%20what%5Cu2019s%20more%2C%20it%20is%20difficult%20to%20obtain%20enough%20accurate%20road%20networks%20as%20paired%20training%20datasets.%20It%20is%20a%20huge%20challenge%20to%20extract%20a%20complete%20maritime%20road%20network%20from%20such%20data%20that%20matches%20the%20actual%20route%20of%20the%20ship.%20In%20order%20to%20solve%20these%20problems%2C%20this%20paper%20proposes%20an%20unsupervised%20learning-based%20maritime%20road%20network%20extraction%20model%20T2I-CycleGAN%20based%20on%20CycleGAN.%20The%20method%20translates%20trajectory%20data%20into%20unpaired%20input%20samples%20for%20model%20training%2C%20and%20adds%20dense%20layer%20to%20the%20CycleGAN%20model%20to%20handle%20trajectories%20with%20different%20sampling%20rates.%20We%20evaluate%20the%20approach%20on%20real-world%20AIS%20datasets%20in%20various%20areas%20and%20compare%20the%20extracted%20results%20with%20the%20real%20ship%20coordinate%20data%20in%20terms%20of%20connectivity%20and%20details%2C%20achieving%20effectiveness%20beyond%20the%20most%20related%20work.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22Collaborative%20Computing%3A%20Networking%2C%20Applications%20and%20Worksharing%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3
A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-030-67540-0_12%22%2C%22ISBN%22%3A%22978-3-030-67540-0%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-030-67540-0_12%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A13%3A32Z%22%7D%7D%2C%7B%22key%22%3A%22VKD4NKLL%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ma%20et%20al.%22%2C%22parsedDate%22%3A%222020-11-10%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMa%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.spiedigitallibrary.org%5C%2Fconference-proceedings-of-spie%5C%2F11584%5C%2F115841J%5C%2FAutomatic-identification-method-of-overpasses-based-on-deep-learning%5C%2F10.1117%5C%2F12.2579387.full%26%23039%3B%26gt%3BAutomatic%20identification%20method%20of%20overpasses%20based%20on%20deep%20learning%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Automatic%20identification%20method%20of%20overpasses%20based%20on%20deep%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jingzhen%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bowei%22%2C%22lastName%22%3A%22Wen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fubing%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22The%20automatic%20identification%20of%20overpass%20structures%20is%20of%20great%20significance%20for%20multi-scale%20modeling%2C%20spatial%20analysis%2C%20and%20vehicle%20na
vigation%20of%20road%20networks.%20The%20traditional%20method%20of%20overpass%20recognition%20based%20on%20vector%20data%20relies%20too%20heavily%20on%20the%20characteristics%20of%20manual%20design%20and%20has%20poor%20adaptability%20to%20complex%20scenes.%20In%20this%20paper%2C%20a%20method%20for%20overpass%20identification%20based%20on%20the%20target%20detection%20model%20Faster%20R-CNN%20%28Regions%20with%20Convolutional%20Neural%20Network%29%20is%20proposed.%20This%20method%20uses%20a%20Convolutional%20Neural%20Network%20to%20learn%20the%20deep%20structural%20characteristics%20of%20data%20samples%2C%20and%20then%20automatically%20identifies%20and%20finds%20accurate%20positioning%20of%20the%20overpasses.%20The%20experimental%20results%20show%20that%20this%20method%20is%20able%20to%20identify%20overpasses%20and%20can%20accurately%20determine%20their%20positions%20in%20a%20complex%20road%20network%2C%20avoiding%20the%20influence%20of%20human%20intervention%20on%20the%20uncertainty%20of%20results.%20This%20method%20also%20has%20strong%20anti-interference%20abilities%22%2C%22date%22%3A%222020%5C%2F11%5C%2F10%22%2C%22proceedingsTitle%22%3A%222020%20International%20Conference%20on%20Image%2C%20Video%20Processing%20and%20Artificial%20Intelligence%22%2C%22conferenceName%22%3A%222020%20International%20Conference%20on%20Image%2C%20Video%20Processing%20and%20Artificial%20Intelligence%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1117%5C%2F12.2579387%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.spiedigitallibrary.org%5C%2Fconference-proceedings-of-spie%5C%2F11584%5C%2F115841J%5C%2FAutomatic-identification-method-of-overpasses-based-on-deep-learning%5C%2F10.1117%5C%2F12.2579387.full%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A10%3A51Z%22%7D%7D%5D%7D
Kramm, T. et al. Deep learning-based extraction of Kenya's historical road network from topographic maps. 2025
Duan, W. et al. DIGMAPPER: A Modular System for Automated Geologic Map Digitization. 2025
López-Rauhut, M. et al. Segmenting France Across Four Centuries. 2025
Vynikal, J. et al. Automatic Elevation Contour Vectorization: A Case Study in a Deep Learning Approach. 2025
Kurochkin, V. et al. U-Net Models Enhanced by Generated Training Data for Automatic Isolines Extraction. 2025
Sertel, E. et al. Automatic Road Extraction from Historical Maps Using Transformer-Based SegFormers. 2024
Chen, Y. et al. Automatic vectorization of historical maps: A benchmark. 2024
Mäyrä, J. et al. Utilizing historical maps in identification of long-term land use and land cover changes. 2023
Wang, Y. et al. Recognition and Semantic Information Extraction for Map Based on Deep Learning. 2023
Wong, C.-S. et al. Semi-supervised learning for topographic map analysis over time: a study of bridge segmentation. 2022
Ran, W. et al. Raster Map Line Element Extraction Method Based on Improved U-Net Network. 2022
Avcı, C. et al. Deep Learning-Based Road Extraction From Historical Maps. 2022
Mao, X. et al. Deep learning-enhanced extraction of drainage networks from digital elevation models. 2021
Ekim, B. et al. Automatic Road Extraction from Historical Maps Using Deep Learning Techniques: A Regional Case Study of Turkey in a German World War II Map. 2021
Satari, R. et al. Extraction of linear structures from digital terrain models using deep learning. 2021
Lenc, L. et al. Border Detection for Seamless Connection of Historical Cadastral Maps. 2021
Petitpierre, R. et al. Generic Semantic Segmentation of Historical Maps. 2021
Can, Y.S. et al. Automatic Detection of Road Types From the Third Military Mapping Survey of Austria-Hungary Historical Map Series With Deep Convolutional Neural Networks. 2021
Yang, X. et al. T2I-CycleGAN: A CycleGAN for Maritime Road Network Extraction from Crowdsourcing Spatio-Temporal AIS Trajectory Data. 2021
Ma, J. et al. Automatic identification method of overpasses based on deep learning. 2020
Feature Extraction (Areas)
Xia, X. et al. MapSAM: adapting segment anything model for automated feature detection in historical maps. 2025
Arzoumanidis, L. et al. Automatic Uncertainty-Aware Synthetic Data Bootstrapping for Historical Map Segmentation. 2025
of%20the%20impact%20of%20our%20data%20bootstrapping%20methods.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2511.15875%22%2C%22date%22%3A%222025-11-19%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2511.15875%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2511.15875%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-24T18%3A45%3A22Z%22%7D%7D%2C%7B%22key%22%3A%223LHV9S4U%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xia%20et%20al.%22%2C%22parsedDate%22%3A%222025-10-31%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXia%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2510.27547%26%23039%3B%26gt%3BMapSAM2%3A%20Adapting%20SAM2%20for%20Automatic%20Segmentation%20of%20Historical%20Map%20Images%20and%20Time%20Series%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22MapSAM2%3A%20Adapting%20SAM2%20for%20Automatic%20Segmentation%20of%20Historical%20Map%20Images%20and%20Time%20Series%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xue%22%2C%22lastName%22%3A%22Xia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Randall%22%2C%22lastName%22%3A%22Balestriero%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tao%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yixin%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22auth
or%22%2C%22firstName%22%3A%22Andrew%22%2C%22lastName%22%3A%22Ding%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dev%22%2C%22lastName%22%3A%22Saini%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20are%20unique%20and%20valuable%20archives%20that%20document%20geographic%20features%20across%20different%20time%20periods.%20However%2C%20automated%20analysis%20of%20historical%20map%20images%20remains%20a%20significant%20challenge%20due%20to%20their%20wide%20stylistic%20variability%20and%20the%20scarcity%20of%20annotated%20training%20data.%20Constructing%20linked%20spatio-temporal%20datasets%20from%20historical%20map%20time%20series%20is%20even%20more%20time-consuming%20and%20labor-intensive%2C%20as%20it%20requires%20synthesizing%20information%20from%20multiple%20maps.%20Such%20datasets%20are%20essential%20for%20applications%20such%20as%20dating%20buildings%2C%20analyzing%20the%20development%20of%20road%20networks%20and%20settlements%2C%20studying%20environmental%20changes%20etc.%20We%20present%20MapSAM2%2C%20a%20unified%20framework%20for%20automatically%20segmenting%20both%20historical%20map%20images%20and%20time%20series.%20Built%20on%20a%20visual%20foundation%20model%2C%20MapSAM2%20adapts%20to%20diverse%20segmentation%20tasks%20with%20few-shot%20fine-tuning.%20Our%20key%20innovation%20is%20to%20treat%20both%20historical%20map%20images%20and%20time%20series%20as%20videos.%20For%20images%2C%20we%20process%20a%20set%20of%20tiles%20as%20a%20video%2C%20enabling%20the%20memory%20attention%20mechanism%20to%20incorporate%20contextual%20cues%20from%20similar%20tiles%2C%20leading%20to%20improved%20geometric%20accuracy%2C%20particularly%20for%20areal%20features.%20For%20time%20series%2C%20we%20introduce%20the%20annotated%20Siegfried%20Building%20Time%20Series%20Dataset%20and%2C%20to%20reduce%20annotation%20costs%2C%20propose%20generating%20pseudo%20tim
e%20series%20from%20single-year%20maps%20by%20simulating%20common%20temporal%20transformations.%20Experimental%20results%20show%20that%20MapSAM2%20learns%20temporal%20associations%20effectively%20and%20can%20accurately%20segment%20and%20link%20buildings%20in%20time%20series%20under%20limited%20supervision%20or%20using%20pseudo%20videos.%20We%20will%20release%20both%20our%20dataset%20and%20code%20to%20support%20future%20research.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2510.27547%22%2C%22date%22%3A%222025-10-31%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2510.27547%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2510.27547%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-06T14%3A58%3A08Z%22%7D%7D%2C%7B%22key%22%3A%22622W5RNG%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22McCarthy%22%2C%22parsedDate%22%3A%222025-08-05%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMcCarthy%2C%20A.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2508.03564%26%23039%3B%26gt%3BA%20Scalable%20Machine%20Learning%20Pipeline%20for%20Building%20Footprint%20Detection%20in%20Historical%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22A%20Scalable%20Machine%20Learning%20Pipeline%20for%20Building%20Footprint%20Detection%20in%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Annemarie%22%2C%22lastName%22%3A%22McCarth
y%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20offer%20a%20valuable%20lens%20through%20which%20to%20study%20past%20landscapes%20and%20settlement%20patterns.%20While%20prior%20research%20has%20leveraged%20machine%20learning%20based%20techniques%20to%20extract%20building%20footprints%20from%20historical%20maps%2C%20such%20approaches%20have%20largely%20focused%20on%20urban%20areas%20and%20tend%20to%20be%20computationally%20intensive.%20This%20presents%20a%20challenge%20for%20research%20questions%20requiring%20analysis%20across%20extensive%20rural%20regions%2C%20such%20as%20verifying%20historical%20census%20data%20or%20locating%20abandoned%20settlements.%20In%20this%20paper%2C%20this%20limitation%20is%20addressed%20by%20proposing%20a%20scalable%20and%20efficient%20pipeline%20tailored%20to%20rural%20maps%20with%20sparse%20building%20distributions.%20The%20method%20described%20employs%20a%20hierarchical%20machine%20learning%20based%20approach%3A%20convolutional%20neural%20network%20%28CNN%29%20classifiers%20are%20first%20used%20to%20progressively%20filter%20out%20map%20sections%20unlikely%20to%20contain%20buildings%2C%20significantly%20reducing%20the%20area%20requiring%20detailed%20analysis.%20The%20remaining%20high%20probability%20sections%20are%20then%20processed%20using%20CNN%20segmentation%20algorithms%20to%20extract%20building%20features.%20The%20pipeline%20is%20validated%20using%20test%20sections%20from%20the%20Ordnance%20Survey%20Ireland%20historical%2025%20inch%20map%20series%20and%206%20inch%20map%20series%2C%20demonstrating%20both%20high%20performance%20and%20improved%20efficiency%20compared%20to%20conventional%20segmentation-only%20approaches.%20Application%20of%20the%20technique%20to%20both%20map%20series%2C%20covering%20the%20same%20geographic%20region%2C%20highlights%20its%20potential%20for%20historical%20and%20archaeological%20discovery.%20Notably%2C%20the%20pipeline%20identified%20a%20settlement%20of%20approximately%2022%20buildings%20in%20Tully%
2C%20Co.%20Galway%2C%20present%20in%20the%206%20inch%20map%2C%20produced%20in%201839%2C%20but%20absent%20from%20the%2025%20inch%20map%2C%20produced%20in%201899%2C%20suggesting%20it%20may%20have%20been%20abandoned%20during%20the%20Great%20Famine%20period.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2508.03564%22%2C%22date%22%3A%222025-08-05%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2508.03564%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2508.03564%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-10T20%3A18%3A02Z%22%7D%7D%2C%7B%22key%22%3A%224SB5PMF8%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wang%20et%20al.%22%2C%22parsedDate%22%3A%222025-08-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWang%2C%20J.-H.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2025.2473568%26%23039%3B%26gt%3BUnsupervised%20domain%20adaptation%20for%20cross-style%2C%20cross-year%20land%20use%20understanding%20from%20historical%20maps%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Unsupervised%20domain%20adaptation%20for%20cross-style%2C%20cross-year%20land%20use%20understanding%20from%20historical%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun-Hua%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andy%20Da-Yu%22%2C%22lastName%22%3A%22Wang%22%
7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hsiung-Ming%22%2C%22lastName%22%3A%22Liao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ming-Ching%22%2C%22lastName%22%3A%22Chang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Richard%20Tzong-Han%22%2C%22lastName%22%3A%22Tsai%22%7D%5D%2C%22abstractNote%22%3A%22Digitizing%20historical%20topographic%20maps%20is%20essential%20for%20spatial%20analysis%20in%20GIS%3B%20however%2C%20conventional%20methods%20for%20digitizing%20these%20maps%20are%20labor-intensive%20and%20challenging%20due%20to%20non-explicit%20boundaries%20and%20inconsistent%20map%20styles.%20We%20address%20these%20challenges%20by%20proposing%20a%20new%20Map%20Style%20Segmentation%20%28MapStyleSeg%29%20method%20that%20employs%20unsupervised%20domain%20adaptation%20%28UDA%29%20from%20deep%20learning%20%28DL%29%20to%20enhance%20cross-style%2C%20cross-year%20automatic%20map%20segmentation%20and%20conversion.%20Our%20method%2C%20MapStyleSeg%2C%20is%20exemplified%20by%20training%20on%20a%20fully%20annotated%20topographic%20map%20of%20Taiwan%20in%202017%20and%20applying%20it%20to%20a%202001%20topographic%20map%20without%20annotations.%20We%20also%20evaluated%20different%20encoder-decoder%20architectures%20and%20loss%20functions.%20Our%20results%20show%20that%20using%20the%20ResNet-101%20backbone%20with%20the%20SegFormer%20decoder%20and%20a%20mix%20of%20focal%20and%20Dice%20loss%20yields%20the%20best%20performance%3A%2094.94%25%20overall%20accuracy%20%28Acc%29%2C%2081.8%25%20mean%20Intersection%20over%20Union%20%28mIoU%29%2C%20outperforming%20standard%20U-Net%20models%20without%20UDA%20%2888.23%25%20Acc%2C%2049.3%25%20mIoU%29.%20Our%20approach%20addresses%20the%20challenges%20of%20digitizing%20historical%20maps%20with%20varying%20styles%2C%20further%20advancing%20GIS%20digitization%20of%20historical%20maps%2C%20and%20offering%20useful%20information%20for%20urban%20planning%2C%20environmental%20monitoring%2C%20
and%20decision-making%20processes.%20This%20work%20highlights%20the%20novel%20use%20of%20DL%20algorithms%20to%20automate%20complex%20GIS%20data%20processing%20that%20transforms%20historical%20maps%20into%20spatial%20datasets.%22%2C%22date%22%3A%222025-08-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2025.2473568%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2025.2473568%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-29T20%3A33%3A58Z%22%7D%7D%2C%7B%22key%22%3A%223AEBC7WA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Duan%20et%20al.%22%2C%22parsedDate%22%3A%222025-06-19%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDuan%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2506.16006%26%23039%3B%26gt%3BDIGMAPPER%3A%20A%20Modular%20System%20for%20Automated%20Geologic%20Map%20Digitization%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22DIGMAPPER%3A%20A%20Modular%20System%20for%20Automated%20Geologic%20Map%20Digitization%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%20P.%22%2C%22lastName%22%3A%22Gerlek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Steven%20N.%22%2C%22lastName%22%3A%22Minton%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Kno
block%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fandel%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Theresa%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leeje%22%2C%22lastName%22%3A%22Jang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sofia%22%2C%22lastName%22%3A%22Kirsanova%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zekun%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yijun%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20geologic%20maps%20contain%20rich%20geospatial%20information%2C%20such%20as%20rock%20units%2C%20faults%2C%20folds%2C%20and%20bedding%20planes%2C%20that%20is%20critical%20for%20assessing%20mineral%20resources%20essential%20to%20renewable%20energy%2C%20electric%20vehicles%2C%20and%20national%20security.%20However%2C%20digitizing%20maps%20remains%20a%20labor-intensive%20and%20time-consuming%20task.%20We%20present%20DIGMAPPER%2C%20a%20modular%2C%20scalable%20system%20developed%20in%20collaboration%20with%20the%20United%20States%20Geological%20Survey%20%28USGS%29%20to%20automate%20the%20digitization%20of%20geologic%20maps.%20DIGMAPPER%20features%20a%20fully%20dockerized%2C%20workflow-orchestrated%20architecture%20that%20integrates%20state-of-the-art%20deep%20learning%20models%20for%20map%20layout%20analysis%2C%20feature%20extraction%2C%20and%20georeferencing.%20To%20overcome%20challenges%20such%20as%20limited%20training%20data%20and%20complex%20visual%20content%2C%20our%20system%20employs%20innovative%20techniques%2C%20including%20in-context%20learning%20with%20large%20language%20models%2C%20synthetic%20data%20generation%2C%20and%20transformer-based%20models.%20Evaluations%20on%20over%20100%20annotated%20
maps%20from%20the%20DARPA-USGS%20dataset%20demonstrate%20high%20accuracy%20across%20polygon%2C%20line%2C%20and%20point%20feature%20extraction%2C%20and%20reliable%20georeferencing%20performance.%20Deployed%20at%20USGS%2C%20DIGMAPPER%20significantly%20accelerates%20the%20creation%20of%20analysis-ready%20geospatial%20datasets%2C%20supporting%20national-scale%20critical%20mineral%20assessments%20and%20broader%20geoscientific%20applications.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2506.16006%22%2C%22date%22%3A%222025-06-19%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2506.16006%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2506.16006%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-10T19%3A42%3A19Z%22%7D%7D%2C%7B%22key%22%3A%2255893HNY%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yuan%20and%20Sester%22%2C%22parsedDate%22%3A%222025-06-09%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYuan%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F6%5C%2F52%5C%2F2025%5C%2F%26%23039%3B%26gt%3BLeveraging%20LLMs%20and%20attention-mechanism%20for%20automatic%20annotation%20of%20historical%20maps%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Leveraging%20LLMs%20and%20attention-mechanism%20for%20automatic%20annotation%20of%20historical%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName
%22%3A%22Yunshuang%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Monika%22%2C%22lastName%22%3A%22Sester%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20are%20essential%20resources%20that%20provide%20insights%20into%20the%20geographical%20landscapes%20of%20the%20past.%20They%20serve%20as%20valuable%20tools%20for%20researchers%20across%20disciplines%20such%20as%20history%2C%20geography%2C%20and%20urban%20studies%2C%20facilitating%20the%20reconstruction%20of%20historical%20environments%20and%20the%20analysis%20of%20spatial%20transformations%20over%20time.%20However%2C%20when%20constrained%20to%20analogue%20or%20scanned%20formats%2C%20their%20interpretation%20is%20limited%20to%20humans%20and%20therefore%20not%20scalable.%20Recent%20advancements%20in%20machine%20learning%2C%20particularly%20in%20computer%20vision%20and%20large%20language%20models%20%28LLMs%29%2C%20have%20opened%20new%20avenues%20for%20automating%20the%20recognition%20and%20classification%20of%20features%20and%20objects%20in%20historical%20maps.%20In%20this%20paper%2C%20we%20propose%20a%20novel%20distillation%20method%20that%20leverages%20LLMs%20and%20attention%20mechanisms%20for%20the%20automatic%20annotation%20of%20historical%20maps.%20LLMs%20are%20employed%20to%20generate%20coarse%20classification%20labels%20for%20low-resolution%20historical%20image%20patches%2C%20while%20attention%20mechanisms%20are%20utilized%20to%20refine%20these%20labels%20to%20higher%20resolutions.%20Experimental%20results%20demonstrate%20that%20the%20refined%20labels%20achieve%20a%20high%20recall%20of%20more%20than%2090%25.%20Additionally%2C%20the%20intersection%20over%20union%20%28IoU%29%20scores%5Cu201484.2%25%20for%20Wood%20and%2072.0%25%20for%20Settlement%5Cu2014%20along%20with%20precision%20scores%20of%2087.1%25%20and%2079.5%25%2C%20respectively%2C%20indicate%20that%20most%20labels%20are%20well-aligned%20with%20ground-truth%20annotations.%20Notably%2C%20these%20result
s%20were%20achieved%20without%20the%20use%20of%20fine-grained%20manual%20labels%20during%20training%2C%20underscoring%20the%20potential%20of%20our%20approach%20for%20efficient%20and%20scalable%20historical%20map%20analysis.%22%2C%22date%22%3A%222025-06-09%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.5194%5C%2Fagile-giss-6-52-2025%22%2C%22ISSN%22%3A%222700-8150%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F6%5C%2F52%5C%2F2025%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A12%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22Y7FRDRV4%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22L%5Cu00f3pez-Rauhut%20et%20al.%22%2C%22parsedDate%22%3A%222025-05-30%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BL%5Cu00f3pez-Rauhut%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2505.24824%26%23039%3B%26gt%3BSegmenting%20France%20Across%20Four%20Centuries%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Segmenting%20France%20Across%20Four%20Centuries%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marta%22%2C%22lastName%22%3A%22L%5Cu00f3pez-Rauhut%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hongyu%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mathieu%22%2C%22lastName%22%3A%22Aubry%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Loic%22%2C%22lastName%22%3A%22Landrieu%22%7D%5D%2C%22abstractN
ote%22%3A%22Historical%20maps%20offer%20an%20invaluable%20perspective%20into%20territory%20evolution%20across%20past%20centuries--long%20before%20satellite%20or%20remote%20sensing%20technologies%20existed.%20Deep%20learning%20methods%20have%20shown%20promising%20results%20in%20segmenting%20historical%20maps%2C%20but%20publicly%20available%20datasets%20typically%20focus%20on%20a%20single%20map%20type%20or%20period%2C%20require%20extensive%20and%20costly%20annotations%2C%20and%20are%20not%20suited%20for%20nationwide%2C%20long-term%20analyses.%20In%20this%20paper%2C%20we%20introduce%20a%20new%20dataset%20of%20historical%20maps%20tailored%20for%20analyzing%20large-scale%2C%20long-term%20land%20use%20and%20land%20cover%20evolution%20with%20limited%20annotations.%20Spanning%20metropolitan%20France%20%28548%2C305%20km%5E2%29%2C%20our%20dataset%20contains%20three%20map%20collections%20from%20the%2018th%2C%2019th%2C%20and%2020th%20centuries.%20We%20provide%20both%20comprehensive%20modern%20labels%20and%2022%2C878%20km%5E2%20of%20manually%20annotated%20historical%20labels%20for%20the%2018th%20and%2019th%20century%20maps.%20Our%20dataset%20illustrates%20the%20complexity%20of%20the%20segmentation%20task%2C%20featuring%20stylistic%20inconsistencies%2C%20interpretive%20ambiguities%2C%20and%20significant%20landscape%20changes%20%28e.g.%2C%20marshlands%20disappearing%20in%20favor%20of%20forests%29.%20We%20assess%20the%20difficulty%20of%20these%20challenges%20by%20benchmarking%20three%20approaches%3A%20a%20fully-supervised%20model%20trained%20with%20historical%20labels%2C%20and%20two%20weakly-supervised%20models%20that%20rely%20only%20on%20modern%20annotations.%20The%20latter%20either%20use%20the%20modern%20labels%20directly%20or%20first%20perform%20image-to-image%20translation%20to%20address%20the%20stylistic%20gap%20between%20historical%20and%20contemporary%20maps.%20Finally%2C%20we%20discuss%20how%20these%20methods%20can%20support%20long-term%20environment%20monitoring%2C%20offer
ing%20insights%20into%20centuries%20of%20landscape%20transformation.%20Our%20official%20project%20repository%20is%20publicly%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2FArchiel19%5C%2FFRAx4.git.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2505.24824%22%2C%22date%22%3A%222025-05-30%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2505.24824%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2505.24824%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T12%3A29%3A31Z%22%7D%7D%2C%7B%22key%22%3A%22DHDAQB34%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Levin%20et%20al.%22%2C%22parsedDate%22%3A%222025-01-25%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLevin%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flink.springer.com%5C%2F10.1007%5C%2Fs10661-025-13634-1%26%23039%3B%26gt%3BAssessing%20spatially%20explicit%20long-term%20landscape%20dynamics%20based%20on%20automated%20production%20of%20land%20category%20layers%20from%20Danish%20late%20nineteenth-century%20topographic%20maps%20in%20comparison%20with%20contemporary%20maps%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Assessing%20spatially%20explicit%20long-term%20landscape%20dynamics%20based%20on%20automated%20production%20of%20land%20category%20layers%20from%20Danish%20late%20nineteenth-century%20topographic%20maps%20in%20comparison%20with%20contemporary%20maps%22%2C%22creators%22%3A%5B%7B%22c
reatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gregor%22%2C%22lastName%22%3A%22Levin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Geoff%22%2C%22lastName%22%3A%22Groom%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stig%20Roar%22%2C%22lastName%22%3A%22Svenningsen%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20topographical%20maps%20contain%20valuable%2C%20spatially%20and%20thematically%20detailed%20information%20about%20past%20landscapes.%20Yet%2C%20for%20analyses%20of%20landscape%20dynamics%20through%20geographical%20information%20systems%2C%20it%20is%20necessary%20to%20%5Cu201cunlock%5Cu201d%20this%20information%20via%20map%20processing.%20For%20two%20study%20areas%20in%20northern%20and%20central%20Jutland%2C%20Denmark%2C%20we%20apply%20object-based%20image%20analysis%2C%20vector%20GIS%2C%20colour%20image%20segmentation%2C%20and%20machine%20learning%20processes%20to%20produce%20machine-readable%20layers%20for%20the%20land%20use%20and%20land%20cover%20categories%20forest%2C%20wetland%2C%20heath%2C%20dune%20sand%2C%20and%20water%20bodies%20from%20topographic%20maps%20from%20the%20late%20nineteenth%20century.%20Obtained%20overall%20accuracy%20was%2092.3%25.%20A%20comparison%20with%20a%20contemporary%20map%20revealed%20spatially%20explicit%20landscape%20dynamics%20dominated%20by%20transitions%20from%20heath%20and%20wetland%20to%20agriculture%20and%20forest%20and%20from%20heath%20and%20dune%20sand%20to%20forest.%20However%2C%20dune%20sand%20was%20also%20characterised%20by%20more%20complex%20transitions%20to%20heath%20and%20dry%20grassland%2C%20which%20can%20be%20related%20to%20active%20prevention%20of%20sand%20drift%20but%20that%20can%20also%20be%20biased%20by%20different%20categorisations%20of%20dune%20sand%20between%20the%20historical%20and%20contemporary%20data.%20We%20conclude%20that%20automated%20production%20of%20machine-readable%20layers%20of%20land%20use%20and%20land%20cover%20categories%20from%20historical%20topog
(Entry truncated in source.) 2025. https://doi.org/10.1007/s10661-025-13634-1

Yuan, Y., Thiemann, F., Dahms, T., & Sester, M. (2025). Semantic segmentation of time-series of historical maps by learning from only one map. https://doi.org/10.1080/23729333.2025.2545586

Vu, T., Nguyen, H., Nguyen, N., Pham, C., & Tran, C. (2025). Advancing geopolitical map analysis: An intelligent system for territorial integrity verification. In: Information and Communication Technology. https://doi.org/10.1007/978-981-96-4285-4_6

Arzoumanidis, L., Knechtel, J., Haunert, J.-H., & Dehbi, Y. (2025). Semantic segmentation of historical maps using Self-Constructing Graph Convolutional Networks. https://doi.org/10.1080/15230406.2025.2468304

Saxton, A., Dong, J., Bode, A., Jaroenchai, N., Kooper, R., Zhu, X., Kwark, D. H., Kramer, W., Kindratenko, V., & Luo, S. (2024). Accurate feature extraction from historical geologic maps using open-set segmentation and detection. https://doi.org/10.3390/geosciences14110305

Du, K., Ren, F., Wang, Y., Che, X., Liu, J., Hou, J., & You, Z. (2024). Integration of spatial and co-existence relationships to improve administrative region target detection in map images. https://doi.org/10.3390/ijgi13060216

Xia, X., Zhang, T., Heitzler, M., & Hurni, L. (2024). Vectorizing historical maps with topological consistency: A hybrid approach using transformers and contour-based instance segmentation. https://doi.org/10.1016/j.jag.2024.103837

Chen, Y., Chazalon, J., Carlinet, E., Ngoc, M. Ô. V., Mallet, C., & Perret, J. (2024). Automatic vectorization of historical maps: A benchmark. https://doi.org/10.1371/journal.pone.0298217

Duan, W. (2023). Efficient and accurate object extraction from scanned maps by leveraging external data and learning representative context. In: Proceedings of the 31st ACM International Conference on Advances in Geographic Information Systems (SIGSPATIAL '23). https://doi.org/10.1145/3589132.3628371

Mäyrä, J., Kivinen, S., Keski-Saari, S., Poikolainen, L., & Kumpula, T. (2023). Utilizing historical maps in identification of long-term land use and land cover changes. https://doi.org/10.1007/s13280-023-01838-z

Šanca, S., Jyhne, S., Gazzea, M., & Arghandeh, R. (2023). An end-to-end deep learning workflow for building segmentation, boundary regularization and vectorization of building footprints. (Entry truncated in source.) https://isprs-archives.copernicus.org/articles/XLVIII-4-W7-2023/169/2023/
0that%20contribute%20to%20occluded%20and%20geometrically%20incorrectly%20segmented%20buildings.%20To%20address%20this%20issue%2C%20we%20propose%20an%20end-to-end%20workflow%20that%20utilizes%20binary%20semantic%20segmentation%2C%20regularization%2C%20and%20vectorization.%20We%20implement%20and%20assess%20the%20performance%20of%20four%20convolutional%20neural%20network%20architectures%20including%20U-Net%2C%20U-NetFormer%2C%20FT-UnetFormer%2C%20and%20DCSwin%20on%20the%20MapAI%20Precision%20in%20Building%20Segmentation%20competition.%20To%20additionally%20improve%20the%20shape%20of%20the%20predicted%20buildings%20we%20apply%20regularization%20on%20the%20predictions%20to%20assess%20whether%20regularization%20further%20improves%20the%20geometrical%20shape%20and%20improve%20the%20prediction%20accuracy.%20We%20aim%20to%20produce%20accurate%20predictions%20with%20regularized%20boundaries%20that%20can%20prove%20useful%20in%20many%20cartographic%20and%20engineering%20applications.%20The%20regularization%20and%20vectorization%20workflow%20is%20further%20developed%20into%20a%20working%20QGIS-plugin%20that%20can%20be%20used%20to%20extend%20the%20functionality%20of%20QGIS.%20Our%20aim%20is%20to%20provide%20an%20end-to-end%20workflow%20for%20building%20segmentation%2C%20regularization%20and%20vectorization.%22%2C%22date%22%3A%222023-06-22%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLVIII-4-W7-2023-169-2023%22%2C%22ISSN%22%3A%221682-1750%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fisprs-archives.copernicus.org%5C%2Farticles%5C%2FXLVIII-4-W7-2023%5C%2F169%5C%2F2023%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-11-15T18%3A36%3A15Z%22%7D%7D%2C%7B%22key%22%3A%22MF6UKZWP%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222023-03-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3
Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWu%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623000278%26%23039%3B%26gt%3BDomain%20adaptation%20in%20segmenting%20historical%20maps%3A%20A%20weakly%20supervised%20approach%20through%20spatial%20co-occurrence%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Domain%20adaptation%20in%20segmenting%20historical%20maps%3A%20A%20weakly%20supervised%20approach%20through%20spatial%20co-occurrence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sidi%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Konrad%22%2C%22lastName%22%3A%22Schindler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20depict%20past%20states%20of%20the%20Earth%5Cu2019s%20surface%20and%20make%20it%20possible%20to%20trace%20the%20natural%20or%20anthropogenic%20evolution%20of%20geographic%20objects%20back%20through%20time.%20However%2C%20the%20state%20of%20the%20depicted%20reality%20is%20not%20the%20only%20source%20of%20change%3A%20maps%20of%20varying%20age%20can%20differ%20in%20terms%20of%20graphical%20design%2C%20and%20also%20in%20terms%20of%20storage%20conditions%2C%20physical%20ageing%20of%20pigments%2C%20and%20the%20scanning%20process%20for%20digitization.%20Consequently%2C%20a%20computer%20vision%20system%20learned%20from%20a%20specific%20%28source%29%20map%
20series%20will%20often%20not%20generalize%20well%20to%20older%20or%20newer%20%28target%29%20maps%2C%20calling%20for%20domain%20adaptation.%20In%20the%20present%20paper%20we%20examine%20%5Cu2013%20to%20our%20knowledge%20for%20the%20first%20time%20%5Cu2013%20domain%20adaptation%20for%20segmenting%20historical%20maps.%20We%20argue%20that%20for%20geo-spatial%20data%20like%20maps%2C%20which%20are%20geo-localized%20by%20definition%2C%20the%20spatial%20co-occurrence%20of%20geographical%20objects%20provides%20a%20supervision%20signal%20for%20domain%20adaptation.%20Since%20only%20a%20subset%20of%20all%20mapped%20objects%20co-occur%2C%20and%20even%20those%20are%20not%20perfectly%20aligned%20due%20to%20both%20real%20topographic%20changes%20and%20variations%20in%20map%20generalization%5C%2Fproduction%2C%20they%20only%20provide%20weak%20supervision%20%5Cu2014%20still%20they%20can%20bring%20a%20substantial%20benefit%20over%20completely%20unsupervised%20domain%20adaptation%20methods.%20The%20core%20of%20our%20proposed%20method%20is%20a%20novel%20self-supervised%20co-occurrence%20network%20that%20detects%20co-occurring%20objects%20across%20maps%20%28specifically%2C%20domains%29%20with%20a%20novel%20loss%20function%20that%20allows%20for%20object%20changes%20and%20spatial%20misalignment.%20Experiments%20show%20that%2C%20for%20the%20task%20of%20segmenting%20hydrological%20objects%20such%20as%20rivers%2C%20lakes%20and%20wetlands%2C%20our%20system%20significantly%20outperforms%20two%20state-of-art%20baselines%2C%20even%20with%20limited%20supervision%20%28e.g.%2C%205%25%29.%20The%20source%20code%20is%20publicly%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fsian-wusidi%5C%2Fspatialcooccurrence.%22%2C%22date%22%3A%222023-03-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.isprsjprs.2023.01.021%22%2C%22ISSN%22%3A%220924-2716%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623000278%22%2C%22collections%22%
3A%5B%5D%2C%22dateModified%22%3A%222025-09-28T18%3A41%3A19Z%22%7D%7D%2C%7B%22key%22%3A%22KRAZZE6H%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWu%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3589132.3625572%26%23039%3B%26gt%3BCross-attention%20Spatio-temporal%20Context%20Transformer%20for%20Semantic%20Segmentation%20of%20Historical%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Cross-attention%20Spatio-temporal%20Context%20Transformer%20for%20Semantic%20Segmentation%20of%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sidi%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yizi%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Konrad%22%2C%22lastName%22%3A%22Schindler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20provide%20useful%20spatio-temporal%20information%20on%20the%20Earth%26%23039%3Bs%20surface%20before%20modern%20earth%20observation%20techniques%20came%20into%20being.%20To%20extract%20information%20from%20maps%2C%20neural%20networks%2C%20which%20gain%20wide%20popularity%20in%20recent%20years%2C%20have%20replaced%20hand-crafted%20map%20processing%20methods%20and%20te
dious%20manual%20labor.%20However%2C%20aleatoric%20uncertainty%2C%20known%20as%20data-dependent%20uncertainty%2C%20inherent%20in%20the%20drawing%5C%2Fscanning%5C%2Ffading%20defects%20of%20the%20original%20map%20sheets%20and%20inadequate%20contexts%20when%20cropping%20maps%20into%20small%20tiles%20considering%20the%20memory%20limits%20of%20the%20training%20process%2C%20challenges%20the%20model%20to%20make%20correct%20predictions.%20As%20aleatoric%20uncertainty%20cannot%20be%20reduced%20even%20with%20more%20training%20data%20collected%2C%20we%20argue%20that%20complementary%20spatio-temporal%20contexts%20can%20be%20helpful.%20To%20achieve%20this%2C%20we%20propose%20a%20U-Net-based%20network%20that%20fuses%20spatio-temporal%20features%20with%20cross-attention%20transformers%20%28U-SpaTem%29%2C%20aggregating%20information%20at%20a%20larger%20spatial%20range%20as%20well%20as%20through%20a%20temporal%20sequence%20of%20images.%20Our%20model%20achieves%20a%20better%20performance%20than%20other%20state-or-art%20models%20that%20use%20either%20temporal%20or%20spatial%20contexts.%20Compared%20with%20pure%20vision%20transformers%2C%20our%20model%20is%20more%20lightweight%20and%20effective.%20To%20the%20best%20of%20our%20knowledge%2C%20leveraging%20both%20spatial%20and%20temporal%20contexts%20have%20been%20rarely%20explored%20before%20in%20the%20segmentation%20task.%20Even%20though%20our%20application%20is%20on%20segmenting%20historical%20maps%2C%20we%20believe%20that%20the%20method%20can%20be%20transferred%20into%20other%20fields%20with%20similar%20problems%20like%20temporal%20sequences%20of%20satellite%20images.%20Our%20code%20is%20freely%20accessible%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fchenyizi086%5C%2Fwu.2023.sigspatial.git.%22%2C%22date%22%3A%22Dezember%2022%2C%202023%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2031st%20ACM%20International%20Conference%20on%20Advances%20in%20Geographic%20Information%20Systems%22%2C%22conferenceName%22%3A%22%22%2C%22language
%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3589132.3625572%22%2C%22ISBN%22%3A%22979-8-4007-0168-9%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3589132.3625572%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-28T16%3A59%3A27Z%22%7D%7D%2C%7B%22key%22%3A%22NYRE23QJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Baloun%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBaloun%2C%20J.%20et%20al.%20FCN-Boosted%20Historical%20Map%20Segmentation%20with%20Little%20Training%20Data.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22FCN-Boosted%20Historical%20Map%20Segmentation%20with%5Cu00a0Little%20Training%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Josef%22%2C%22lastName%22%3A%22Baloun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ladislav%22%2C%22lastName%22%3A%22Lenc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pavel%22%2C%22lastName%22%3A%22Kr%5Cu00e1l%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Gernot%20A.%22%2C%22lastName%22%3A%22Fink%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Rajiv%22%2C%22lastName%22%3A%22Jain%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Koichi%22%2C%22lastName%22%3A%22Kise%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Richard%22%2C%22lastName%22%3A%22Zanibbi%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20deals%20with%20automatic%20image%20segmentation%20in%20poorly%20resourced%20areas.%20We%20concentrate%20on%20map%20conte
nt%20segmentation%20in%20historical%20maps%20as%20an%20example%20of%20such%20a%20domain.%20In%20such%20cases%2C%20conventional%20computer%20vision%20%28CV%29%20approaches%20fail%20in%20unexpected%20unique%20regions%20such%20as%20map%20content%20area%20exceeding%20the%20map%20frame%2C%20while%20deep%20learning%20methods%20lack%20boundary%20localization%20accuracy.%20Therefore%2C%20we%20propose%20an%20efficient%20approach%20that%20combines%20conventional%20CV%20techniques%20with%20deep%20learning%20and%20practically%20eliminates%20their%20drawbacks.%20To%20do%20so%2C%20we%20redefine%20the%20learning%20objective%20of%20a%20simple%20fully%20convolutional%20network%20to%20make%20the%20training%20easier%20and%20the%20model%20more%20robust%20even%20with%20few%20training%20samples.%20The%20presented%20method%20provides%20excellent%20results%20compared%20to%20more%20sophisticated%20but%20solely%20deep%20learning%20or%20traditional%20computer%20vision%20techniques%20as%20shown%20in%20%5Cu201cMapSeg%5Cu201d%20segmentation%20competition%2C%20where%20all%20other%20approaches%20were%20significantly%20outperformed.%20We%20further%20propose%20two%20additional%20approaches%20that%20improve%20the%20original%20method%20and%20set%20a%20new%20state-of-the-art%20result%20on%20the%20MapSeg%20dataset.%20The%20methods%20are%20further%20tested%20on%20an%20extended%20version%20of%20the%20Map%20Border%20dataset%20to%20show%20their%20robustness.%22%2C%22date%22%3A%222023%22%2C%22proceedingsTitle%22%3A%22Document%20Analysis%20and%20Recognition%20-%20ICDAR%202023%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-41676-7_30%22%2C%22ISBN%22%3A%22978-3-031-41676-7%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-41676-7_30%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T13%3A40%3A38Z%22%7D%7D%2C%7B%22key%22%3A%22SNVKQBNR%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummar
y%22%3A%22Lenc%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLenc%2C%20L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-34111-3_16%26%23039%3B%26gt%3BTowards%20Historical%20Map%20Analysis%20Using%20Deep%20Learning%20Techniques%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Towards%20Historical%20Map%20Analysis%20Using%20Deep%20Learning%20Techniques%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ladislav%22%2C%22lastName%22%3A%22Lenc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Josef%22%2C%22lastName%22%3A%22Baloun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ji%5Cu0159%5Cu00ed%22%2C%22lastName%22%3A%22Mart%5Cu00ednek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pavel%22%2C%22lastName%22%3A%22Kr%5Cu00e1l%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Ilias%22%2C%22lastName%22%3A%22Maglogiannis%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Lazaros%22%2C%22lastName%22%3A%22Iliadis%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22John%22%2C%22lastName%22%3A%22MacIntyre%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Manuel%22%2C%22lastName%22%3A%22Dominguez%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20presents%20methods%20for%20automatic%20analysis%20of%20historical%20cadastral%20maps.%20The%20methods%20are%20developed%20as%20a%20part%20of%20a%20complex%20system
%20for%20map%20digitisation%2C%20analysis%20and%20processing.%20Our%20goal%20is%20to%20detect%20important%20features%20in%20individual%20map%20sheets%20to%20allow%20their%20further%20processing%20and%20connecting%20the%20sheets%20into%20one%20seamless%20map%20that%20can%20be%20better%20presented%20online.%20We%20concentrate%20on%20detection%20of%20the%20map%20frame%2C%20which%20defines%20the%20important%20segment%20of%20the%20map%20sheet.%20Other%20crucial%20features%20are%20so-called%20inches%20that%20define%20the%20measuring%20scale%20of%20the%20map.%20We%20also%20detect%20the%20actual%20map%20area.%22%2C%22date%22%3A%222023%22%2C%22proceedingsTitle%22%3A%22Artificial%20Intelligence%20%20Applications%20%20and%20Innovations%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-34111-3_16%22%2C%22ISBN%22%3A%22978-3-031-34111-3%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-34111-3_16%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T13%3A40%3A04Z%22%7D%7D%2C%7B%22key%22%3A%22N6VAUMPU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Luo%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLuo%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F10236599%26%23039%3B%26gt%3BCritical%20Minerals%20Map%20Feature%20Extraction%20Using%20Deep%20Learning%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Critical%20Mineral
s%20Map%20Feature%20Extraction%20Using%20Deep%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shirui%22%2C%22lastName%22%3A%22Luo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aaron%22%2C%22lastName%22%3A%22Saxton%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Albert%22%2C%22lastName%22%3A%22Bode%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Priyam%22%2C%22lastName%22%3A%22Mazumdar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Volodymyr%22%2C%22lastName%22%3A%22Kindratenko%22%7D%5D%2C%22abstractNote%22%3A%22Critical%20minerals%20play%20a%20significant%20role%20in%20various%20areas%20such%20as%20national%20security%2C%20economic%20growth%2C%20renewable%20energy%20development%2C%20and%20infrastructure.%20The%20assessment%20of%20critical%20minerals%20requires%20examining%20historical%20scanned%20maps.%20The%20traditional%20processes%20of%20analyzing%20these%20scanned%20maps%20are%20labor-intensive%2C%20time-consuming%2C%20and%20prone%20to%20errors.%20In%20this%20study%2C%20we%20introduce%20a%20deep%20learning%20technique%20to%20help%20assess%20critical%20minerals%20by%20automatically%20extracting%20digital%20features%20from%20scanned%20maps.%20Polygon%20feature%20extraction%20is%20essential%20for%20evaluating%20the%20concentration%20and%20abundance%20of%20critical%20minerals.%20The%20extracted%20polygon%20features%20can%20be%20used%20to%20update%20existing%20geospatial%20databases%2C%20conduct%20further%20analysis%2C%20and%20support%20decision-making%20processes.%20The%20proposed%20U-Net%20model%20takes%20a%20six-channel%20array%20as%20input%2C%20where%20the%20legend%20feature%20is%20concatenated%20with%20the%20map%20image%20and%20serves%20as%20a%20prompt%2C%20and%20the%20model%20can%20generate%20image%20segmentation%20based%20on%20arbitrary%20prompts%20at%20test%20time.%20Our%20study%20shows%20that%20the%20modified%20U-Net%20model%20can%20e
ffectively%20extract%20the%20mining-related%20polygon%20regions%20based%20on%20features%20listed%20in%20legends%20from%20historic%20topographic%20maps.%20The%20model%20achieves%20a%20median%20F1-score%20of%200.67.%20This%20study%20has%20the%20potential%20to%20significantly%20reduce%20the%20time%20and%20effort%20involved%20in%20manually%20digitizing%20geospatial%20data%20from%20historical%20topographic%20maps%2C%20thus%20streamlining%20the%20overall%20assessment%20process.%22%2C%22date%22%3A%222023%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLGRS.2023.3310915%22%2C%22ISSN%22%3A%221558-0571%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F10236599%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-28T18%3A47%3A41Z%22%7D%7D%2C%7B%22key%22%3A%22B8N6BMUS%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhao%20et%20al.%22%2C%22parsedDate%22%3A%222022-11%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhao%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F11%5C%2F572%26%23039%3B%26gt%3BBuilding%20Block%20Extraction%20from%20Historical%20Maps%20Using%20Deep%20Object%20Attention%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Building%20Block%20Extraction%20from%20Historical%20Maps%20Using%20Deep%20Object%20Attention%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%2
2author%22%2C%22firstName%22%3A%22Guangxia%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jian%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lantian%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaofei%22%2C%22lastName%22%3A%22Qi%22%7D%5D%2C%22abstractNote%22%3A%22The%20geographical%20feature%20extraction%20of%20historical%20maps%20is%20an%20important%20foundation%20for%20realizing%20the%20transition%20from%20human%20map%20reading%20to%20machine%20map%20reading.%20The%20current%20methods%20for%20building%20block%20extraction%20from%20historical%20maps%20have%20many%20problems%2C%20such%20as%20low%20accuracy%20and%20poor%20scalability.%20Moreover%2C%20the%20high%20cost%20of%20annotating%20historical%20maps%20further%20limits%20its%20applications.%20In%20this%20study%2C%20a%20method%20for%20extracting%20building%20blocks%20from%20historical%20maps%20is%20proposed%20based%20on%20the%20deep%20object%20attention%20network.%20Based%20on%20the%20OCRNet%20framework%2C%20multiple%20attention%20mechanisms%20were%20used%20to%20improve%20the%20ability%20of%20the%20network%20to%20extract%20the%20contextual%20information%20of%20the%20target.%20Moreover%2C%20through%20the%20optimization%20of%20the%20feature%20extraction%20network%20structure%2C%20the%20impact%20of%20the%20down-sampling%20process%20on%20local%20information%20and%20boundary%20contours%20was%20reduced%2C%20in%20order%20to%20improve%20the%20network%5Cu2019s%20ability%20to%20capture%20boundary%20information.%20Subsequently%2C%20the%20transfer%20learning%20method%20was%20used%20to%20jointly%20train%20the%20network%20model%20on%20both%20remote%20sensing%20datasets%20and%20few-shot%20historical%20map%20datasets%20to%20further%20improve%20the%20feature%20learning%20ability%20of%20the%20network%2C%20which%20overcomes%20the%20constraints%20of%20small%20sample%20sizes.%20The
%20experimental%20results%20show%20that%20the%20proposed%20method%20can%20effectively%20improve%20the%20extraction%20accuracy%20of%20building%20blocks%20from%20historical%20maps.%22%2C%22date%22%3A%222022%5C%2F11%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11110572%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F11%5C%2F572%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-06T11%3A17%3A26Z%22%7D%7D%2C%7B%22key%22%3A%22GXVJH7RC%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xydas%20et%20al.%22%2C%22parsedDate%22%3A%222022-10-19%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXydas%2C%20C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.scitepress.org%5C%2FLink.aspx%3Fdoi%3D10.5220%5C%2F0010839700003124%26%23039%3B%26gt%3BBuildings%20Extraction%20from%20Historical%20Topographic%20Maps%20via%20a%20Deep%20Convolution%20Neural%20Network%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Buildings%20Extraction%20from%20Historical%20Topographic%20Maps%20via%20a%20Deep%20Convolution%20Neural%20Network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christos%22%2C%22lastName%22%3A%22Xydas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anastasios%22%2C%22lastName%22%3A%22Kesidis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kleomenis%22%2C%22lastName%22%3A%22Kalogeropoulos%22%7D%2C%7B%22creatorType%22%3A%22author%2
2%2C%22firstName%22%3A%22Andreas%22%2C%22lastName%22%3A%22Tsatsaris%22%7D%5D%2C%22abstractNote%22%3A%22The%20cartographic%20representation%20is%20static%20by%20definition.%20Therefore%2C%20reading%20a%20map%20of%20the%20past%20can%20provide%20information%2C%20which%20corresponds%20to%20the%20accuracy%2C%20technology%2C%20as%20well%20as%20scientific%20knowledge%20of%20the%20time%20of%20their%20creation.%20Digital%20technology%20enables%20the%20current%20researcher%20to%20%26quot%3Bcopy%26quot%3B%20a%20historical%20map%20and%20%26quot%3Btranscribe%26quot%3B%20it%20to%20today.%20In%20this%20way%2C%20a%20cartographic%20reduction%20from%20the%20past%20to%20the%20present%20is%20possible%2C%20with%20parallel%20visualization%20of%20new%20information%20%28historical%20geodata%29%2C%20which%20the%20researcher%20has%20at%20his%20disposal%2C%20in%20addition%20to%20the%20background.%20In%20this%20work%20a%20deep%20learning%20approach%20is%20presented%20for%20the%20extraction%20of%20buildings%20within%20historical%20topographic%20maps.%20A%20deep%20convolution%20neural%20network%20based%20on%20the%20U-Net%20architecture%20is%20trained%20by%20a%20large%20number%20of%20images%20patches%20in%20a%20deep%20image-to-image%20regression%20mode%20in%20order%20to%20effectively%20isolate%20the%20buildings%20from%20the%20topographic%20map%20while%20ignoring%20other%20surrounding%20or%20overlapping%20information%20like%20texts%20or%20other%20irrelevant%20geosp%20atial%20features.%20Several%20experimental%20scenarios%20on%20a%20historical%20census%20topographic%20map%20investigate%20the%20applicability%20of%20the%20method%20under%20various%20patch%20sizes%20as%20well%20as%20patch%20sampling%20methods.%20The%20so%20far%20results%20show%20that%20the%20proposed%20method%20delivers%20promising%20outcomes%20in%20terms%20of%20building%20detection%20accuracy.%22%2C%22date%22%3A%222022-10-19%22%2C%22proceedingsTitle%22%3A%22%22%2C%22conferenceName%22%3A%2217th%20International%20Conference%20on%20Compu
[…] International Conference on Computer Vision Theory and Applications. https://doi.org/10.5220/0010839700003124

Farmakis-Serebryakova, M., Heitzler, M., & Hurni, L. (2022). Terrain Segmentation Using a U-Net for Improved Relief Shading. ISPRS International Journal of Geo-Information, 11(7), 395. https://doi.org/10.3390/ijgi11070395

Du, K., Che, X., Wang, Y., Liu, J., Luo, A., Ma, R., & Xu, S. (2022). Comparison of RetinaNet-Based Single-Target Cascading and Multi-Target Detection Models for Administrative Regions in Network Map Pictures. Sensors, 22(19), 7594. https://doi.org/10.3390/s22197594

Schnürer, R., Öztireli, A. C., Heitzler, M., Sieber, R., & Hurni, L. (2022). Instance Segmentation, Body Part Parsing, and Pose Estimation of Human Figures in Pictorial Maps. International Journal of Cartography. https://doi.org/10.1080/23729333.2021.1949087

Soliman, A., Chen, Y., Luo, S., Makharov, R., & Kindratenko, V. (2022). Weakly Supervised Segmentation of Buildings in Digital Elevation Models. IEEE Geoscience and Remote Sensing Letters. https://doi.org/10.1109/LGRS.2022.3177160

Wu, J., Xiong, J., Zhao, Y., & Hu, X. (2021). An Automatic Extraction Method for Hatched Residential Areas in Raster Maps Based on Multi-Scale Feature Fusion. ISPRS International Journal of Geo-Information, 10(12), 831. https://doi.org/10.3390/ijgi10120831

Schnürer, R., Sieber, R., Schmid-Lanter, J., Öztireli, A. C., & Hurni, L. (2021). Detection of Pictorial Map Objects with Convolutional Neural Networks. The Cartographic Journal. https://doi.org/10.1080/00087041.2020.1738112

Garcia-Molsosa, A., Orengo, H. A., Lawrence, D., Philip, G., Hopper, K., & Petrie, C. A. (2021). Potential of deep learning segmentation for the extraction of archaeological features from historical map series. Archaeological Prospection. https://doi.org/10.1002/arp.1807

Chen, Y., Carlinet, E., Chazalon, J., Mallet, C., Duménieu, B., & Perret, J. (2021). Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction. In J. Lladós, D. Lopresti, & S. Uchida (Eds.), Document Analysis and Recognition – ICDAR 2021. https://doi.org/10.1007/978-3-030-86337-1_34

Chen, Y., Carlinet, E., Chazalon, J., Mallet, C., Duménieu, B., & Perret, J. (2021). Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation. In J. Lindblad, F. Malmberg, & N. Sladoje (Eds.), Discrete Geometry and Mathematical Morphology. https://doi.org/10.1007/978-3-030-76657-3_5
%22%3A%222024-05-02T20%3A15%3A52Z%22%7D%7D%2C%7B%22key%22%3A%22EVF393MF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Petitpierre%20et%20al.%22%2C%22parsedDate%22%3A%222021%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BPetitpierre%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%26%23039%3B%26gt%3BGeneric%20Semantic%20Segmentation%20of%20Historical%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Generic%20Semantic%20Segmentation%20of%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22R%5Cu00e9mi%22%2C%22lastName%22%3A%22Petitpierre%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fr%5Cu00e9d%5Cu00e9ric%22%2C%22lastName%22%3A%22Kaplan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Isabella%22%2C%22lastName%22%3A%22di%20Lenardo%22%7D%5D%2C%22abstractNote%22%3A%22Research%20in%20automatic%20map%20processing%20is%20largely%20focused%20on%20homogeneous%20corpora%20or%20even%20individual%20maps%2C%20leading%20to%20inflexible%20models.%20Based%20on%20two%20new%20corpora%2C%20the%20first%20one%20centered%20on%20maps%20of%20Paris%20and%20the%20second%20one%20gathering%20maps%20of%20cities%20from%20all%20over%20the%20world%2C%20we%20present%20a%20method%20for%20computing%20the%20figurative%20diversity%20of%20cartographic%20collections.%20In%20a%20second%20step%2C%20we%20discuss%20the%20actual%20opportunities%20for%20CNN-based%20semantic%20segmentation%2
0of%20historical%20city%20maps.%20Through%20several%20experiments%2C%20we%20analyze%20the%20impact%20of%20figurative%20and%20cultural%20diversity%20on%20the%20segmentation%20performance.%20Finally%2C%20we%20highlight%20the%20potential%20for%20large-scale%20and%20generic%20algorithms.%20Training%20data%20and%20code%20of%20the%20described%20algorithms%20are%20made%20open-source%20and%20published%20with%20this%20article.%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22CEUR%20Workshop%20Proceedings%22%2C%22conferenceName%22%3A%22CHR%202021%3A%20Computational%20Humanities%20Research%20Conference%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fceur-ws.org%5C%2FVol-2989%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A34%3A57Z%22%7D%7D%2C%7B%22key%22%3A%22KIH8G26J%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Maxwell%20et%20al.%22%2C%22parsedDate%22%3A%222020-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMaxwell%2C%20A.E.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F12%5C%2F24%5C%2F4145%26%23039%3B%26gt%3BSemantic%20Segmentation%20Deep%20Learning%20for%20Extracting%20Surface%20Mine%20Extents%20from%20Historic%20Topographic%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Semantic%20Segmentation%20Deep%20Learning%20for%20Extracting%20Surface%20Mine%20Extents%20from%20Historic%20Topographic%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%
22%3A%22author%22%2C%22firstName%22%3A%22Aaron%20E.%22%2C%22lastName%22%3A%22Maxwell%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michelle%20S.%22%2C%22lastName%22%3A%22Bester%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luis%20A.%22%2C%22lastName%22%3A%22Guillen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christopher%20A.%22%2C%22lastName%22%3A%22Ramezan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dennis%20J.%22%2C%22lastName%22%3A%22Carpinello%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yiting%22%2C%22lastName%22%3A%22Fan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Faith%20M.%22%2C%22lastName%22%3A%22Hartley%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shannon%20M.%22%2C%22lastName%22%3A%22Maynard%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jaimee%20L.%22%2C%22lastName%22%3A%22Pyron%22%7D%5D%2C%22abstractNote%22%3A%22Historic%20topographic%20maps%2C%20which%20are%20georeferenced%20and%20made%20publicly%20available%20by%20the%20United%20States%20Geological%20Survey%20%28USGS%29%20and%20the%20National%20Map%5Cu2019s%20Historical%20Topographic%20Map%20Collection%20%28HTMC%29%2C%20are%20a%20valuable%20source%20of%20historic%20land%20cover%20and%20land%20use%20%28LCLU%29%20information%20that%20could%20be%20used%20to%20expand%20the%20historic%20record%20when%20combined%20with%20data%20from%20moderate%20spatial%20resolution%20Earth%20observation%20missions.%20This%20is%20especially%20true%20for%20landscape%20disturbances%20that%20have%20a%20long%20and%20complex%20historic%20record%2C%20such%20as%20surface%20coal%20mining%20in%20the%20Appalachian%20region%20of%20the%20eastern%20United%20States.%20In%20this%20study%2C%20we%20investigate%20this%20specific%20mapping%20problem%20using%20modified%20UNet%20semantic%20segmentation%20deep%20learning%20%28DL%29%2C%20which%20is%20based%20on%20convolution
al%20neural%20networks%20%28CNNs%29%2C%20and%20a%20large%20example%20dataset%20of%20historic%20surface%20mine%20disturbance%20extents%20from%20the%20USGS%20Geology%2C%20Geophysics%2C%20and%20Geochemistry%20Science%20Center%20%28GGGSC%29.%20The%20primary%20objectives%20of%20this%20study%20are%20to%20%281%29%20evaluate%20model%20generalization%20to%20new%20geographic%20extents%20and%20topographic%20maps%20and%20%282%29%20to%20assess%20the%20impact%20of%20training%20sample%20size%2C%20or%20the%20number%20of%20manually%20interpreted%20topographic%20maps%2C%20on%20model%20performance.%20Using%20data%20from%20the%20state%20of%20Kentucky%2C%20our%20findings%20suggest%20that%20DL%20semantic%20segmentation%20can%20detect%20surface%20mine%20disturbance%20features%20from%20topographic%20maps%20with%20a%20high%20level%20of%20accuracy%20%28Dice%20coefficient%20%3D%200.902%29%20and%20relatively%20balanced%20omission%20and%20commission%20error%20rates%20%28Precision%20%3D%200.891%2C%20Recall%20%3D%200.917%29.%20When%20the%20model%20is%20applied%20to%20new%20topographic%20maps%20in%20Ohio%20and%20Virginia%20to%20assess%20generalization%2C%20model%20performance%20decreases%3B%20however%2C%20performance%20is%20still%20strong%20%28Ohio%20Dice%20coefficient%20%3D%200.837%20and%20Virginia%20Dice%20coefficient%20%3D%200.763%29.%20Further%2C%20when%20reducing%20the%20number%20of%20topographic%20maps%20used%20to%20derive%20training%20image%20chips%20from%2084%20to%2015%2C%20model%20performance%20was%20only%20slightly%20reduced%2C%20suggesting%20that%20models%20that%20generalize%20well%20to%20new%20data%20and%20geographic%20extents%20may%20not%20require%20a%20large%20training%20set.%20We%20suggest%20the%20incorporation%20of%20DL%20semantic%20segmentation%20methods%20into%20applied%20workflows%20to%20decrease%20manual%20digitizing%20labor%20requirements%20and%20call%20for%20additional%20research%20associated%20with%20applying%20semantic%20segmentation%20methods%20to%20alternative%20cartograp
hic%20representations%20to%20supplement%20research%20focused%20on%20multispectral%20image%20analysis%20and%20classification.%22%2C%22date%22%3A%222020%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Frs12244145%22%2C%22ISSN%22%3A%222072-4292%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F12%5C%2F24%5C%2F4145%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A13%3A11Z%22%7D%7D%2C%7B%22key%22%3A%22L667SAEA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Heitzler%20and%20Hurni%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHeitzler%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1111%5C%2Ftgis.12610%26%23039%3B%26gt%3BCartographic%20reconstruction%20of%20building%20footprints%20from%20historical%20maps%3A%20A%20study%20on%20the%20Swiss%20Siegfried%20map%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Cartographic%20reconstruction%20of%20building%20footprints%20from%20historical%20maps%3A%20A%20study%20on%20the%20Swiss%20Siegfried%20map%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Extracting%20features%20from%20printed%20maps%20has%20been%20a%20challenge%20for%20decades%3B%20historical%20maps%20pose%20an%20even%20l
arger%20problem%20due%20to%20manual%2C%20inconsistent%20drawing%20or%20scribing%2C%20low%20printing%20quality%2C%20and%20geometrical%20distortions.%20In%20this%20article%2C%20a%20new%20workflow%20is%20introduced%2C%20consisting%20of%20a%20segmentation%20step%20and%20a%20vectorization%20step%20to%20acquire%20high-quality%20polygon%20representations%20of%20building%20footprints%20from%20the%20Siegfried%20map%20series.%20For%20segmentation%2C%20an%20ensemble%20of%20U-Nets%20is%20trained%2C%20yielding%20pixel-based%20predictions%20with%20an%20average%20intersection%20over%20union%20of%2088.2%25%20and%20an%20average%20precision%20of%2098.55%25.%20For%20vectorization%2C%20methods%20based%20on%20contour%20tracing%20and%20orientation-based%20clustering%20are%20proposed%20to%20approximate%20idealized%20polygonal%20representations.%20The%20workflow%20has%20been%20tested%20on%2010%20randomly%20selected%20map%20sheets%20from%20the%20Siegfried%20map%2C%20showing%20that%20the%20time%20required%20to%20manually%20correct%20these%20polygons%20drops%20to%20about%2045%20min%20per%20map%20sheet.%20Of%20this%20sample%2C%20approximately%2010%25%20of%20buildings%20required%20manual%20corrections.%20This%20workflow%20can%20serve%20as%20a%20blueprint%20for%20similar%20vectorization%20efforts.%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1111%5C%2Ftgis.12610%22%2C%22ISSN%22%3A%221467-9671%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1111%5C%2Ftgis.12610%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A04%3A43Z%22%7D%7D%5D%7D
Arzoumanidis, L. et al. Automatic Uncertainty-Aware Synthetic Data Bootstrapping for Historical Map Segmentation. 2025
Duan, W. et al. DIGMAPPER: A Modular System for Automated Geologic Map Digitization. 2025
Yuan, Y. et al. Leveraging LLMs and attention-mechanism for automatic annotation of historical maps. 2025
López-Rauhut, M. et al. Segmenting France Across Four Centuries. 2025
Yuan, Y. et al. Semantic segmentation of time-series of historical maps by learning from only one map. 2025
Vu, T. et al. Advancing Geopolitical Map Analysis: An Intelligent System for Territorial Integrity Verification. 2025
Arzoumanidis, L. et al. Semantic segmentation of historical maps using Self-Constructing Graph Convolutional Networks. 2025
Chen, Y. et al. Automatic vectorization of historical maps: A benchmark. 2024
Mäyrä, J. et al. Utilizing historical maps in identification of long-term land use and land cover changes. 2023
Baloun, J. et al. FCN-Boosted Historical Map Segmentation with Little Training Data. 2023
Lenc, L. et al. Towards Historical Map Analysis Using Deep Learning Techniques. 2023
Luo, S. et al. Critical Minerals Map Feature Extraction Using Deep Learning. 2023
Zhao, Y. et al. Building Block Extraction from Historical Maps Using Deep Object Attention Networks. 2022
Xydas, C. et al. Buildings Extraction from Historical Topographic Maps via a Deep Convolution Neural Network. 2022
Farmakis-Serebryakova, M. et al. Terrain Segmentation Using a U-Net for Improved Relief Shading. 2022
Schnürer, R. et al. Instance Segmentation, Body Part Parsing, and Pose Estimation of Human Figures in Pictorial Maps. 2022
Soliman, A. et al. Weakly Supervised Segmentation of Buildings in Digital Elevation Models. 2022
Schnürer, R. et al. Detection of Pictorial Map Objects with Convolutional Neural Networks. 2021
Garcia-Molsosa, A. et al. Potential of deep learning segmentation for the extraction of archaeological features from historical map series. 2021
Chen, Y. et al. Vectorization of Historical Maps Using Deep Edge Filtering and Closed Shape Extraction. 2021
Chen, Y. et al. Combining Deep Learning and Mathematical Morphology for Historical Map Segmentation. 2021
Petitpierre, R. et al. Generic Semantic Segmentation of Historical Maps. 2021
Maxwell, A.E. et al. Semantic Segmentation Deep Learning for Extracting Surface Mine Extents from Historic Topographic Maps. 2020
Heitzler, M. et al. Cartographic reconstruction of building footprints from historical maps: A study on the Swiss Siegfried map. 2020
Feature Extraction (Labels)
Lin, Y. et al. LIGHT: Multi-modal Text Linking on Historical Maps. 2026
Zou, M. et al. Recognizing and Sequencing Multi-word Texts in Maps Using an Attentive Pointer. 2025
Ren, J. et al. HI-CMAIM: Hybrid Intelligence-Based Multi-Source Unstructured Chinese Map Annotation Interpretation Model. 2025
Pradhan, A. et al. An Effort Toward Localization and Recognition of Elevation Values in a Topographic Sheet. 2025
Ma, M. et al. Semantic-aware automatic extraction method for bottom sediment annotations in raster nautical charts. 2024
0sediment%20annotations%20in%20raster%20nautical%20charts%2C%20using%20image%20processing%20techniques%20to%20improve%20it.%20First%2C%20an%20adaptive%20chart%20partitioning%20model%20that%20considers%20element%20completeness%20is%20constructed.%20Second%2C%20a%20principle%20for%20the%20unique%20identification%20of%20elements%20based%20on%20spatial%20conflicts%20is%20designed.%20Finally%2C%20a%20model%20for%20accurately%20extracting%20semantic%20information%20for%20bottom%20sediment%20annotations%20is%20established.%20To%20evaluate%20the%20effectiveness%20of%20the%20proposed%20method%2C%20we%20implemented%20a%20model%20based%20on%20the%20PyTorch%20framework%20and%20used%20the%20PIL%20library%20to%20analyze%20the%20results.%20We%20also%20conducted%20comparative%20experiments%20on%20multiple%20CNN%20models%20to%20recommend%20the%20selection%20of%20such%20models%20in%20the%20proposed%20method%20by%20comparing%20their%20classification%20and%20recognition%20performance.%20The%20experimental%20results%20indicate%20that%20%281%29%20the%20proposed%20model%20can%20achieve%20high-precision%20extraction%20of%20bottom%20sediment%20annotations%20in%20raster%20nautical%20charts.%20%282%29%20Furthermore%2C%20the%20proposed%20model%20generally%20has%20high%20recognition%20accuracy%20and%20semantic%20completeness%2C%20with%20better%20recognition%20precision%20than%20traditional%20pattern%20recognition%20methods.%22%2C%22date%22%3A%222024-11-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2024.2305473%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2024.2305473%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-03-20T22%3A49%3A34Z%22%7D%7D%2C%7B%22key%22%3A%2275ZN32Q6%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lin%20and%20Chiang%22%2C%22parsedDate%22%3A%222024-08-24%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcs
l-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLin%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3637528.3671589%26%23039%3B%26gt%3BHyper-Local%20Deformable%20Transformers%20for%20Text%20Spotting%20on%20Historical%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Hyper-Local%20Deformable%20Transformers%20for%20Text%20Spotting%20on%20Historical%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yijun%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%5D%2C%22abstractNote%22%3A%22Text%20on%20historical%20maps%20contains%20valuable%20information%20providing%20georeferenced%20historical%2C%20political%2C%20and%20cultural%20contexts.%20However%2C%20text%20extraction%20from%20historical%20maps%20has%20been%20challenging%20due%20to%20the%20lack%20of%20%281%29%20effective%20methods%20and%20%282%29%20training%20data.%20Previous%20approaches%20use%20ad-hoc%20steps%20tailored%20to%20only%20specific%20map%20styles.%20Recent%20machine%20learning-based%20text%20spotters%20%28e.g.%2C%20for%20scene%20images%29%20have%20the%20potential%20to%20solve%20these%20challenges%20because%20of%20their%20flexibility%20in%20supporting%20various%20types%20of%20text%20instances.%20However%2C%20these%20methods%20remain%20challenges%20in%20extracting%20precise%20image%20features%20for%20predicting%20every%20sub-component%20%28boundary%20points%20and%20characters%29%20in%20a%20text%20instance.%20This%20is%20critical%20because%20map%20text%20can%20be%20len
gthy%20and%20highly%20rotated%20with%20complex%20backgrounds%2C%20posing%20difficulties%20in%20detecting%20relevant%20image%20features%20from%20a%20rough%20text%20region.%20This%20paper%20proposes%20PALETTE%2C%20an%20end-to-end%20text%20spotter%20for%20scanned%20historical%20maps%20of%20a%20wide%20variety.%20PALETTE%20introduces%20a%20novel%20hyper-local%20sampling%20module%20to%20explicitly%20learn%20localized%20image%20features%20around%20the%20target%20boundary%20points%20and%20characters%20of%20a%20text%20instance%20for%20detection%20and%20recognition.%20PALETTE%20also%20enables%20hyper-local%20positional%20embeddings%20to%20learn%20spatial%20interactions%20between%20boundary%20points%20and%20characters%20within%20and%20across%20text%20instances.%20In%20addition%2C%20this%20paper%20presents%20a%20novel%20approach%20to%20automatically%20generate%20synthetic%20map%20images%2C%20SYNTHMAP%2B%2C%20for%20training%20text%20spotters%20for%20historical%20maps.%20The%20experiment%20shows%20that%20PALETTE%20with%20SYNTHMAP%2B%20outperforms%20SOTA%20text%20spotters%20on%20two%20new%20benchmark%20datasets%20of%20historical%20maps%2C%20particularly%20for%20long%20and%20angled%20text.%20We%20have%20deployed%20PALETTE%20with%20SYNTHMAP%2B%20to%20process%20over%2060%2C000%20maps%20in%20the%20David%20Rumsey%20Historical%20Map%20collection%20and%20generated%20over%20100%20million%20text%20labels%20to%20support%20map%20searching.%22%2C%22date%22%3A%22August%2024%2C%202024%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2030th%20ACM%20SIGKDD%20Conference%20on%20Knowledge%20Discovery%20and%20Data%20Mining%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3637528.3671589%22%2C%22ISBN%22%3A%229798400704901%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3637528.3671589%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-08-05T22%3A24%3A37Z%22%7D%7D%2C%7B%22key%22%3A%22WPYZHV68%22%2C%22library%22%3
A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lin%20et%20al.%22%2C%22parsedDate%22%3A%222023-06-28%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLin%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fjournals.plos.org%5C%2Fplosone%5C%2Farticle%3Fid%3D10.1371%5C%2Fjournal.pone.0286340%26%23039%3B%26gt%3BCreating%20building-level%2C%20three-dimensional%20digital%20models%20of%20historic%20urban%20neighborhoods%20from%20Sanborn%20Fire%20Insurance%20maps%20using%20machine%20learning%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Creating%20building-level%2C%20three-dimensional%20digital%20models%20of%20historic%20urban%20neighborhoods%20from%20Sanborn%20Fire%20Insurance%20maps%20using%20machine%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yue%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jialin%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Adam%22%2C%22lastName%22%3A%22Porr%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gerika%22%2C%22lastName%22%3A%22Logan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ningchuan%22%2C%22lastName%22%3A%22Xiao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Harvey%20J.%22%2C%22lastName%22%3A%22Miller%22%7D%5D%2C%22abstractNote%22%3A%22Sanborn%20Fire%20Insurance%20maps%20contain%20a%20wealth%20of%20building-level%20information%20about%20U.S.%20cities%20dating%2
0back%20to%20the%20late%2019th%20century.%20They%20are%20a%20valuable%20resource%20for%20studying%20changes%20in%20urban%20environments%2C%20such%20as%20the%20legacy%20of%20urban%20highway%20construction%20and%20urban%20renewal%20in%20the%2020th%20century.%20However%2C%20it%20is%20a%20challenge%20to%20automatically%20extract%20the%20building-level%20information%20effectively%20and%20efficiently%20from%20Sanborn%20maps%20because%20of%20the%20large%20number%20of%20map%20entities%20and%20the%20lack%20of%20appropriate%20computational%20methods%20to%20detect%20these%20entities.%20This%20paper%20contributes%20to%20a%20scalable%20workflow%20that%20utilizes%20machine%20learning%20to%20identify%20building%20footprints%20and%20associated%20properties%20on%20Sanborn%20maps.%20This%20information%20can%20be%20effectively%20applied%20to%20create%203D%20visualization%20of%20historic%20urban%20neighborhoods%20and%20inform%20urban%20changes.%20We%20demonstrate%20our%20methods%20using%20Sanborn%20maps%20for%20two%20neighborhoods%20in%20Columbus%2C%20Ohio%2C%20USA%20that%20were%20bisected%20by%20highway%20construction%20in%20the%201960s.%20Quantitative%20and%20visual%20analysis%20of%20the%20results%20suggest%20high%20accuracy%20of%20the%20extracted%20building-level%20information%2C%20with%20an%20F-1%20score%20of%200.9%20for%20building%20footprints%20and%20construction%20materials%2C%20and%20over%200.7%20for%20building%20utilizations%20and%20numbers%20of%20stories.%20We%20also%20illustrate%20how%20to%20visualize%20pre-highway%20neighborhoods.%22%2C%22date%22%3A%2228.06.2023%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1371%5C%2Fjournal.pone.0286340%22%2C%22ISSN%22%3A%221932-6203%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fjournals.plos.org%5C%2Fplosone%5C%2Farticle%3Fid%3D10.1371%5C%2Fjournal.pone.0286340%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-12T22%3A51%3A58Z%22%7D%7D%2C%7B%22key%22%3A%22ZG77IJ49%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3
A%7B%22creatorSummary%22%3A%22Huang%20et%20al.%22%2C%22parsedDate%22%3A%222023-03-16%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHuang%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F3%5C%2F128%26%23039%3B%26gt%3BLeveraging%20Deep%20Convolutional%20Neural%20Network%20for%20Point%20Symbol%20Recognition%20in%20Scanned%20Topographic%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Leveraging%20Deep%20Convolutional%20Neural%20Network%20for%20Point%20Symbol%20Recognition%20in%20Scanned%20Topographic%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenjun%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qun%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anzhu%22%2C%22lastName%22%3A%22Yu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenyue%22%2C%22lastName%22%3A%22Guo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qing%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bowei%22%2C%22lastName%22%3A%22Wen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Li%22%2C%22lastName%22%3A%22Xu%22%7D%5D%2C%22abstractNote%22%3A%22Point%20symbols%20on%20a%20scanned%20topographic%20map%20%28STM%29%20provide%20crucial%20geographic%20information.%20However%2C%20point%20symbol%20recognition%20entails%20high%20complexity%20and%20uncertainty%20owing%20to%
20the%20stickiness%20of%20map%20elements%20and%20singularity%20of%20symbol%20structures.%20Therefore%2C%20extracting%20point%20symbols%20from%20STMs%20is%20challenging.%20Currently%2C%20point%20symbol%20recognition%20is%20performed%20primarily%20through%20pattern%20recognition%20methods%20that%20have%20low%20accuracy%20and%20efficiency.%20To%20address%20this%20problem%2C%20we%20investigated%20the%20potential%20of%20a%20deep%20learning-based%20method%20for%20point%20symbol%20recognition%20and%20proposed%20a%20deep%20convolutional%20neural%20network%20%28DCNN%29-based%20model%20for%20this%20task.%20We%20created%20point%20symbol%20datasets%20from%20different%20sources%20for%20training%20and%20prediction%20models.%20Within%20this%20framework%2C%20atrous%20spatial%20pyramid%20pooling%20%28ASPP%29%20was%20adopted%20to%20handle%20the%20recognition%20difficulty%20owing%20to%20the%20differences%20between%20point%20symbols%20and%20natural%20objects.%20To%20increase%20the%20positioning%20accuracy%2C%20the%20k-means%2B%2B%20clustering%20method%20was%20used%20to%20generate%20anchor%20boxes%20that%20were%20more%20suitable%20for%20our%20point%20symbol%20datasets.%20Additionally%2C%20to%20improve%20the%20generalization%20ability%20of%20the%20model%2C%20we%20designed%20two%20data%20augmentation%20methods%20to%20adapt%20to%20symbol%20recognition.%20Experiments%20demonstrated%20that%20the%20deep%20learning%20method%20considerably%20improved%20the%20recognition%20accuracy%20and%20efficiency%20compared%20with%20classical%20algorithms.%20The%20introduction%20of%20ASPP%20in%20the%20object%20detection%20algorithm%20resulted%20in%20higher%20mean%20average%20precision%20and%20intersection%20over%20union%20values%2C%20indicating%20a%20higher%20recognition%20accuracy.%20It%20is%20also%20demonstrated%20that%20data%20augmentation%20methods%20can%20alleviate%20the%20cross-domain%20problem%20and%20improve%20the%20rotation%20robustness.%20This%20study%20contributes%20to%20the%20development%20of%20a
lgorithms%20and%20the%20evaluation%20of%20geographic%20elements%20extracted%20from%20STMs.%22%2C%22date%22%3A%222023-03-16%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi12030128%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F3%5C%2F128%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-10-17T18%3A03%3A32Z%22%7D%7D%2C%7B%22key%22%3A%22ENG6PXLU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhai%20et%20al.%22%2C%22parsedDate%22%3A%222023-03-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhai%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F3%5C%2F106%26%23039%3B%26gt%3BModel%20and%20Data%20Integrated%20Transfer%20Learning%20for%20Unstructured%20Map%20Text%20Detection%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Model%20and%20Data%20Integrated%20Transfer%20Learning%20for%20Unstructured%20Map%20Text%20Detection%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yanrui%22%2C%22lastName%22%3A%22Zhai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiran%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Honghao%22%2C%22lastName%22%3A%22Li%22%7D%5D%2C%22abstractNote%22%3A%22The%20emergence%20of%20the%20third%20information%20wave%20makes%20extensive%20maps%20available%20to%20be%20generated%20by%20volunteered%20ways%2C%20never%20specially%20design
ed%20and%20generated%20by%20professional%20institutes%20alone.%20These%20large-scale%20images-based%20volunteered%20maps%20created%20by%20the%20public%20provide%20plentiful%20geographical%20information%20regarding%20a%20place%20while%20posing%20a%20challenge%20for%20recognizing%20the%20unstructured%20text%20in%20these%20maps%20for%20previous%20approaches%20to%20standard%20map%20text%20detection.%20Map%20text%20or%20map%20annotations%20denote%20the%20critical%20element%20of%20map%20content.%20To%20achieve%20the%20detection%20of%20unstructured%20map%20text%2C%20this%20paper%20proposed%20an%20integrated%20data-based%20and%20model-based%20transfer%20learning%20model%2C%20which%20mainly%20respectively%20included%20data%20augmentation%20techniques%20and%20adaptive%20fine-tuning%2C%20to%20reinforce%20the%20state-of-the-art%20CNNs%20by%20transferring%20the%20OCR%20knowledge%20for%20detecting%20the%20unstructured%20text%20units%20in%20volunteered%20maps.%20The%20experiment%20proved%20that%20our%20proposed%20framework%20can%20effectively%20reinforce%20the%20state-of-the-art%20CNN%20in%20detecting%20unstructured%20map%20text.%20We%20hope%20our%20research%20results%20can%20contribute%20to%20unstructured%20map%20text%20detection%20and%20recognition.%22%2C%22date%22%3A%222023-03-03%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi12030106%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F3%5C%2F106%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-10-17T17%3A40%3A56Z%22%7D%7D%2C%7B%22key%22%3A%22Z8ICYQF5%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Qiu%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcs
l-entry%26quot%3B%26gt%3BQiu%2C%20Q.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0169136822005704%26%23039%3B%26gt%3BGeological%20symbol%20recognition%20on%20geological%20map%20using%20convolutional%20recurrent%20neural%20network%20with%20augmented%20data%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Geological%20symbol%20recognition%20on%20geological%20map%20using%20convolutional%20recurrent%20neural%20network%20with%20augmented%20data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qinjun%22%2C%22lastName%22%3A%22Qiu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yongjian%22%2C%22lastName%22%3A%22Tan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kai%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Miao%22%2C%22lastName%22%3A%22Tian%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhong%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Liufeng%22%2C%22lastName%22%3A%22Tao%22%7D%5D%2C%22abstractNote%22%3A%22Geological%20maps%20contain%20rich%20geological%20knowledge%2C%20such%20as%20faults%2C%20structures%2C%20minerals%2C%20etc.%20Automatically%20and%20accurately%20recognition%20of%20geological%20symbols%20is%20the%20basis%20step%20for%20understanding%20geological%20maps%20and%20constructing%20geological%20knowledge%20connections%20between%20maps%20and%20texts.%20Due%20to%20the%20diverse%20combinations%20of%20symbols%2C%20complex%20background%20and%20color%20noise%20interference%2C%20symbol%20subscripts%20in%20geological%20maps%20directly%20affect%20the%20accurate%20recognition%20of%20geological%20symbols.%20In
%20order%20to%20solve%20the%20above%20problems%2C%20this%20paper%20proposes%20a%20three-stages%20of%20the%20framework%20based%20on%20deep%20learning%20to%20recognize%20symbols%20in%20geological%20maps.%20The%20framework%20contains%20dataset%20automatic%20construction%2C%20convolutional%20recurrent%20neural%20network%20%28CRNN%29%20model%20training%2C%20and%20geo-symbol%20index%20construction.%20First%2C%20we%20propose%20a%20method%20to%20generate%20a%20base%20character-based%20training%20dataset%20that%20can%20generate%20geological%20map%20legend%20datasets%20of%20arbitrary%20length%20and%20different%20color%20backgrounds%3B%20second%2C%20we%20train%20a%20variable-length%20image%20text%20recognition%20optical%20character%20recognition%20%28OCR%29%20model%20CRNN%20and%20conduct%20comparative%20experiments%20to%20verify%20the%20effectiveness%20of%20our%20proposed%20recognition%20framework.%20Finally%2C%20the%20stage%20of%20geo-symbol%20index%20construction%20establishes%20the%20corresponding%20index%20list%20of%20geological%20symbols%20and%20corresponding%20descriptions%20for%20converting%20the%20recognized%20geological%20symbols%20into%20corresponding%20names%20and%20finally%20output%20the%20result.%20We%20performed%20experimental%20validation%20and%20analysis%20on%20our%20automatically%20generated%20dataset.%20The%20experimental%20results%20show%20that%20the%20accuracy%20of%20our%20algorithm%20recognition%20reaches%2094%25%2C%20which%20verifies%20the%20effectiveness%20of%20our%20proposed%20algorithm.%22%2C%22date%22%3A%2202%5C%2F2023%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.oregeorev.2022.105262%22%2C%22ISSN%22%3A%2201691368%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS0169136822005704%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A15%3A26Z%22%7D%7D%2C%7B%22key%22%3A%229YTEGU56%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mati
dis%20et%20al.%22%2C%22parsedDate%22%3A%222023%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMatidis%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-41734-4_22%26%23039%3B%26gt%3BDetecting%20Text%20on%20Historical%20Maps%20by%20Selecting%20Best%20Candidates%20of%20Deep%20Neural%20Networks%20Output%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Detecting%20Text%20on%20Historical%20Maps%20by%20Selecting%20Best%20Candidates%20of%20Deep%20Neural%20Networks%20Output%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gerasimos%22%2C%22lastName%22%3A%22Matidis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Basilis%22%2C%22lastName%22%3A%22Gatos%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anastasios%20L.%22%2C%22lastName%22%3A%22Kesidis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Panagiotis%22%2C%22lastName%22%3A%22Kaddas%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Gernot%20A.%22%2C%22lastName%22%3A%22Fink%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Rajiv%22%2C%22lastName%22%3A%22Jain%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Koichi%22%2C%22lastName%22%3A%22Kise%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Richard%22%2C%22lastName%22%3A%22Zanibbi%22%7D%5D%2C%22abstractNote%22%3A%22The%20final%20and%20perhaps%20the%20most%20crucial%20step%20in%20Object%20Detection%20is%20the%20selection%20of%20the
%20best%20candidates%20out%20of%20all%20the%20proposed%20regions%20a%20framework%20outputs.%20Typically%2C%20Non-Maximum%20Suppression%20approaches%20%28NMS%29%20are%20employed%20to%20tackle%20this%20problem.%20The%20standard%20NMS%20relies%20exclusively%20on%20the%20confidence%20scores%2C%20as%20it%20selects%20the%20bounding%20box%20with%20the%20highest%20score%20within%20a%20cluster%20of%20boxes%20determined%20by%20a%20relatively%20high%20Intersection%20over%20Union%20%28IoU%29%20between%20each%20other%2C%20and%20then%20suppresses%20the%20remaining%20ones.%20On%20the%20other%20hand%2C%20algorithms%20like%20Confluence%20determine%20clusters%20of%20bounding%20boxes%20according%20to%20the%20proximity%20between%20them%20and%20select%20as%20best%20the%20box%20that%20is%20closer%20to%20the%20other%20ones%20within%20each%20cluster.%20In%20this%20work%2C%20we%20combine%20these%20methods%20by%20creating%20clusters%20of%20high%20confidence%20scores%20according%20to%20their%20IoU%20and%20then%20we%20calculate%20the%20sums%20of%20the%20Manhattan%20distances%20between%20the%20vertices%20of%20each%20box%20and%20all%20the%20others%2C%20in%20order%20to%20finally%20select%20the%20one%20with%20the%20minimum%20overall%20distance.%20Our%20results%20are%20compared%20with%20the%20standard%20NMS%20and%20the%20Locality-Aware%20NMS%20%28LANMS%29%2C%20an%20algorithm%20that%20is%20widely%20used%20in%20Object%20Detection%20and%20merges%20the%20boxes%20row%20by%20row.%20The%20research%20field%20that%20this%20work%20explores%20is%20the%20text%20detection%20on%20historical%20maps%20and%20the%20proposed%20approach%20results%20to%20average%20precision%20that%20is%202.14%5Cu20132.94%25%20higher%20for%20evaluation%20IoU%20in%20range%200.50%20to%200.95%20with%20step%200.05%20than%20the%20two%20other%20methods.%22%2C%22date%22%3A%222023%22%2C%22proceedingsTitle%22%3A%22Document%20Analysis%20and%20Recognition%20-%20ICDAR%202023%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22DOI%22
%3A%2210.1007%5C%2F978-3-031-41734-4_22%22%2C%22ISBN%22%3A%22978-3-031-41734-4%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-41734-4_22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T13%3A42%3A41Z%22%7D%7D%2C%7B%22key%22%3A%22L2YITLTU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lenc%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLenc%2C%20L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-06555-2_12%26%23039%3B%26gt%3BHistorical%20Map%20Toponym%20Extraction%20for%20Efficient%20Information%20Retrieval%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Historical%20Map%20Toponym%20Extraction%20for%20Efficient%20Information%20Retrieval%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ladislav%22%2C%22lastName%22%3A%22Lenc%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ji%5Cu0159%5Cu00ed%22%2C%22lastName%22%3A%22Mart%5Cu00ednek%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Josef%22%2C%22lastName%22%3A%22Baloun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Prantl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pavel%22%2C%22lastName%22%3A%22Kr%5Cu00e1l%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Seiichi%22%2C%22lastName%22%3A%22Uchida%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22first
Lin, Y. et al. LIGHT: Multi-modal Text Linking on Historical Maps. 2026
Zou, M. et al. Recognizing and Sequencing Multi-word Texts in Maps Using an Attentive Pointer. 2025
Pradhan, A. et al. An Effort Toward Localization and Recognition of Elevation Values in a Topographic Sheet. 2025
Lin, Y. et al. Hyper-Local Deformable Transformers for Text Spotting on Historical Maps. 2024
Zhai, Y. et al. Model and Data Integrated Transfer Learning for Unstructured Map Text Detection. 2023
Matidis, G. et al. Detecting Text on Historical Maps by Selecting Best Candidates of Deep Neural Networks Output. 2023
Lenc, L. et al. Historical Map Toponym Extraction for Efficient Information Retrieval. 2022
Arundel, S. et al. Deep learning detection and recognition of spot elevations on historic topographic maps. 2022
Garcia-Molsosa, A. et al. Potential of deep learning segmentation for the extraction of archaeological features from historical map series. 2021
Weinman, J. et al. Deep Neural Networks for Text Detection and Recognition in Historical Maps. 2019
Feature Extraction (Fuzzy Elements)
Wang, J.-H. et al. Unsupervised domain adaptation for cross-style, cross-year land use understanding from historical maps. 2025
Litvine, A.D. et al. Built-up areas of nineteenth-century Britain. An integrated methodology for extracting high-resolution urban footprints from historical maps. 2024
O'Hara, R. et al. Unleashing the power of old maps: Extracting symbology from nineteenth century maps using convolutional neural networks to quantify modern land use on historic wetlands. 2024
Vynikal, J. et al. Deep learning approaches for delineating wetlands on historical topographic maps. 2024
Hosseini, K. et al. MapReader: a computer vision pipeline for the semantic exploration of maps at scale. 2022
Wu, S. et al. A Closer Look at Segmentation Uncertainty of Scanned Historical Maps. 2022
formation%20on%20Earth.%20In%20recent%20years%2C%20the%20use%20of%20deep%20learning%20for%20historical%20map%20processing%20has%20gained%20popularity%20to%20replace%20tedious%20manual%20labor.%20However%2C%20neural%20networks%2C%20often%20referred%20to%20as%20%5Cu201cblack%20boxes%5Cu201d%2C%20usually%20generate%20predictions%20not%20well%20calibrated%20for%20indicating%20if%20the%20predictions%20are%20trustworthy.%20Considering%20the%20diversity%20in%20designs%20and%20the%20graphic%20defects%20of%20scanned%20historical%20maps%2C%20uncertainty%20estimates%20can%20benefit%20us%20in%20deciding%20when%20and%20how%20to%20trust%20the%20extracted%20information.%20In%20this%20paper%2C%20we%20compare%20the%20effectiveness%20of%20different%20uncertainty%20indicators%20for%20segmenting%20hydrological%20features%20from%20scanned%20historical%20maps.%20Those%20uncertainty%20indicators%20can%20be%20categorized%20into%20two%20major%20types%2C%20namely%20aleatoric%20uncertainty%20%28uncertainty%20in%20the%20observations%29%20and%20epistemic%20uncertainty%20%28uncertainty%20in%20the%20model%29.%20Specifically%2C%20we%20compare%20their%20effectiveness%20in%20indicating%20erroneous%20predictions%2C%20detecting%20noisy%20and%20out-of-distribution%20designs%2C%20and%20refining%20segmentation%20in%20a%20two-stage%20architecture.%22%2C%22date%22%3A%222022%5C%2F06%5C%2F01%22%2C%22proceedingsTitle%22%3A%22The%20International%20Archives%20of%20the%20Photogrammetry%2C%20Remote%20Sensing%20and%20Spatial%20Information%20Sciences%22%2C%22conferenceName%22%3A%22XXIV%20ISPRS%20Congress%20%5Cu201cImaging%20today%2C%20foreseeing%20tomorrow%5Cu201d%2C%20Commission%20IV%20-%202022%20edition%2C%206%26ndash%3B11%20June%202022%2C%20Nice%2C%20France%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2022-189-2022%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.int-arch-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2022%5C%2F189%5C%2F2022
%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A09%3A01Z%22%7D%7D%2C%7B%22key%22%3A%22JAAXMZRV%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22St%5Cu00e5hl%20and%20Weimann%22%2C%22parsedDate%22%3A%222022-05-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSt%5Cu00e5hl%2C%20N.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS1574954122000061%26%23039%3B%26gt%3BIdentifying%20wetland%20areas%20in%20historical%20maps%20using%20deep%20convolutional%20neural%20networks%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Identifying%20wetland%20areas%20in%20historical%20maps%20using%20deep%20convolutional%20neural%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Niclas%22%2C%22lastName%22%3A%22St%5Cu00e5hl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lisa%22%2C%22lastName%22%3A%22Weimann%22%7D%5D%2C%22abstractNote%22%3A%22The%20local%20environment%20and%20land%20usages%20have%20changed%20a%20lot%20during%20the%20past%20one%20hundred%20years.%20Historical%20documents%20and%20materials%20are%20crucial%20in%20understanding%20and%20following%20these%20changes.%20Historical%20documents%20are%2C%20therefore%2C%20an%20important%20piece%20in%20the%20understanding%20of%20the%20impact%20and%20consequences%20of%20land%20usage%20change.%20This%2C%20in%20turn%2C%20is%20important%20in%20the%20search%20of%20restoration%20projects%20that%20can%20b
e%20conducted%20to%20turn%20and%20reduce%20harmful%20and%20unsustainable%20effects%20originating%20from%20changes%20in%20the%20land-usage.%20This%20work%20extracts%20information%20on%20the%20historical%20location%20and%20geographical%20distribution%20of%20wetlands%2C%20from%20hand-drawn%20maps.%20This%20is%20achieved%20by%20using%20deep%20learning%20%28DL%29%2C%20and%20more%20specifically%20a%20convolutional%20neural%20network%20%28CNN%29.%20The%20CNN%20model%20is%20trained%20on%20a%20manually%20pre-labelled%20dataset%20on%20historical%20wetlands%20in%20the%20area%20of%20J%5Cu00f6nk%5Cu00f6ping%20county%20in%20Sweden.%20These%20are%20all%20extracted%20from%20the%20historical%20map%20called%20%5Cu201cGeneralstabskartan%5Cu201d.%20The%20presented%20CNN%20performs%20well%20and%20achieves%20a%20F1-score%20of%200.886%20when%20evaluated%20using%20a%2010-fold%20cross%20validation%20over%20the%20data.%20The%20trained%20models%20are%20additionally%20used%20to%20generate%20a%20GIS%20layer%20of%20the%20presumable%20historical%20geographical%20distribution%20of%20wetlands%20for%20the%20area%20that%20is%20depicted%20in%20the%20southern%20collection%20in%20Generalstabskartan%2C%20which%20covers%20the%20southern%20half%20of%20Sweden.%20This%20GIS%20layer%20is%20released%20as%20an%20open%20resource%20and%20can%20be%20freely%20used.%20To%20summarise%2C%20the%20presented%20results%20show%20that%20CNNs%20can%20be%20a%20useful%20tool%20in%20the%20extraction%20and%20digitalisation%20of%20non-textual%20information%20in%20historical%20documents%2C%20such%20as%20historical%20maps.%20A%20modern%20GIS%20material%20that%20can%20be%20used%20to%20further%20understand%20the%20past%20land-usage%20change%20is%20produced%20within%20this%20research.%20Previously%2C%20no%20material%20of%20this%20detail%20and%20extent%20have%20been%20available%2C%20due%20to%20the%20large%20effort%20needed%20to%20manually%20create%20such.%20However%2C%20with%20the%20presented%20resource%20better%20quantifications%20and
%20estimations%20of%20historical%20wetlands%20that%20have%20been%20lost%20can%20be%20made.%22%2C%22date%22%3A%222022-05-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.ecoinf.2022.101557%22%2C%22ISSN%22%3A%221574-9541%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS1574954122000061%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A41%3A15Z%22%7D%7D%2C%7B%22key%22%3A%22FPUTSDYJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Uhl%20et%20al.%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BUhl%2C%20J.H.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8946322%26%23039%3B%26gt%3BAutomated%20Extraction%20of%20Human%20Settlement%20Patterns%20From%20Historical%20Topographic%20Map%20Series%20Using%20Weakly%20Supervised%20Convolutional%20Neural%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automated%20Extraction%20of%20Human%20Settlement%20Patterns%20From%20Historical%20Topographic%20Map%20Series%20Using%20Weakly%20Supervised%20Convolutional%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johannes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorTyp
e%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22Information%20extraction%20from%20historical%20maps%20represents%20a%20persistent%20challenge%20due%20to%20inferior%20graphical%20quality%20and%20the%20large%20data%20volume%20of%20digital%20map%20archives%2C%20which%20can%20hold%20thousands%20of%20digitized%20map%20sheets.%20Traditional%20map%20processing%20techniques%20typically%20rely%20on%20manually%20collected%20templates%20of%20the%20symbol%20of%20interest%2C%20and%20thus%20are%20not%20suitable%20for%20large-scale%20information%20extraction.%20In%20order%20to%20digitally%20preserve%20such%20large%20amounts%20of%20valuable%20retrospective%20geographic%20information%2C%20high%20levels%20of%20automation%20are%20required.%20Herein%2C%20we%20propose%20an%20automated%20machine-learning%20based%20framework%20to%20extract%20human%20settlement%20symbols%2C%20such%20as%20buildings%20and%20urban%20areas%20from%20historical%20topographic%20maps%20in%20the%20absence%20of%20training%20data%2C%20employing%20contemporary%20geospatial%20data%20as%20ancillary%20data%20to%20guide%20the%20collection%20of%20training%20samples.%20These%20samples%20are%20then%20used%20to%20train%20a%20convolutional%20neural%20network%20for%20semantic%20image%20segmentation%2C%20allowing%20for%20the%20extraction%20of%20human%20settlement%20patterns%20in%20an%20analysis-ready%20geospatial%20vector%20data%20format.%20We%20test%20our%20method%20on%20United%20States%20Geological%20Survey%20historical%20topographic%20maps%20published%20between%201893%20and%201954.%20The%20results%20are%20promising%2C%20indicating%20high%20degrees%20of%20completeness%20in%20the%20extracted%20settlement%20features%20%28i.e.%2C%20recall%20of%20up%20to%200.96%2C%20F-measure%20of%20up%20to%200.79%29%20and%20will%20guide%20the%20next%20steps%20to%20provid
e%20a%20fully%20automated%20operational%20approach%20for%20large-scale%20geographic%20feature%20extraction%20from%20a%20variety%20of%20historical%20map%20series.%20Moreover%2C%20the%20proposed%20framework%20provides%20a%20robust%20approach%20for%20the%20recognition%20of%20objects%20which%20are%20small%20in%20size%2C%20generalizable%20to%20many%20kinds%20of%20visual%20documents.%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FACCESS.2019.2963213%22%2C%22ISSN%22%3A%222169-3536%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8946322%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A17%3A26Z%22%7D%7D%2C%7B%22key%22%3A%22JAJJMSS5%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Uhl%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BUhl%2C%20J.H.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1049%5C%2Fiet-ipr.2018.5484%26%23039%3B%26gt%3BSpatialising%20uncertainty%20in%20image%20segmentation%20using%20weakly%20supervised%20convolutional%20neural%20networks%3A%20a%20case%20study%20from%20historical%20map%20processing%26lt%3B%5C%2Fa%26gt%3B.%202018%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Spatialising%20uncertainty%20in%20image%20segmentation%20using%20weakly%20supervised%20convolutional%20neural%20networks%3A%20a%20case%20study%20from%20historical%20map%20processing%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22fi
rstName%22%3A%22Johannes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22Convolutional%20neural%20networks%20%28CNNs%29%20such%20as%20encoder%5Cu2013decoder%20CNNs%20have%20increasingly%20been%20employed%20for%20semantic%20image%20segmentation%20at%20the%20pixel-level%20requiring%20pixel-level%20training%20labels%2C%20which%20are%20rarely%20available%20in%20real-world%20scenarios.%20In%20practice%2C%20weakly%20annotated%20training%20data%20at%20the%20image%20patch%20level%20are%20often%20used%20for%20pixel-level%20segmentation%20tasks%2C%20requiring%20further%20processing%20to%20obtain%20accurate%20results%2C%20mainly%20because%20the%20translation%20invariance%20of%20the%20CNN-based%20inference%20can%20turn%20into%20an%20impeding%20property%20leading%20to%20segmentation%20results%20of%20coarser%20spatial%20granularity%20compared%20with%20the%20original%20image.%20However%2C%20the%20inherent%20uncertainty%20in%20the%20segmented%20image%20and%20its%20relationships%20to%20translation%20invariance%2C%20CNN%20architecture%2C%20and%20classification%20scheme%20has%20never%20been%20analysed%20from%20an%20explicitly%20spatial%20perspective.%20Therefore%2C%20the%20authors%20propose%20measures%20to%20spatially%20visualise%20and%20assess%20class%20decision%20confidence%20based%20on%20spatially%20dense%20CNN%20predictions%2C%20resulting%20in%20continuous%20decision%20confidence%20surfaces.%20They%20find%20that%20such%20a%20visual-analytical%20method%20contributes%20to%20a%20better%20understanding%20of%20the%20spatial%20variability%20of%20cl
ass%20score%20confidence%20derived%20from%20weakly%20supervised%20CNN-based%20classifiers.%20They%20exemplify%20this%20approach%20by%20incorporating%20decision%20confidence%20surfaces%20into%20a%20processing%20chain%20for%20the%20extraction%20of%20human%20settlement%20features%20from%20historical%20map%20documents%20based%20on%20weakly%20annotated%20training%20data%20using%20different%20CNN%20architectures%20and%20classification%20schemes.%22%2C%22date%22%3A%222018%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1049%5C%2Fiet-ipr.2018.5484%22%2C%22ISSN%22%3A%221751-9667%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1049%5C%2Fiet-ipr.2018.5484%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A06%3A56Z%22%7D%7D%2C%7B%22key%22%3A%22FCVGC5FC%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Uhl%20et%20al.%22%2C%22parsedDate%22%3A%222017-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BUhl%2C%20J.H.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8362084%26%23039%3B%26gt%3BExtracting%20human%20settlement%20footprint%20from%20historical%20topographic%20map%20series%20using%20context-based%20machine%20learning%26lt%3B%5C%2Fa%26gt%3B.%202017%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Extracting%20human%20settlement%20footprint%20from%20historical%20topographic%20map%20series%20using%20context-based%20machine%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johan
nes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22Information%20extraction%20from%20historical%20maps%20represents%20a%20persistent%20challenge%20due%20to%20inferior%20graphical%20quality%20and%20large%20data%20volume%20in%20digital%20map%20archives%2C%20which%20can%20hold%20thousands%20of%20digitized%20map%20sheets.%20In%20this%20paper%2C%20we%20describe%20an%20approach%20to%20extract%20human%20settlement%20symbols%20in%20United%20States%20Geological%20Survey%20%28USGS%29%20historical%20topographic%20maps%20using%20contemporary%20building%20data%20as%20the%20contextual%20spatial%20layer.%20The%20presence%20of%20a%20building%20in%20the%20contemporary%20layer%20indicates%20a%20high%20probability%20that%20the%20same%20building%20can%20be%20found%20at%20that%20location%20on%20the%20historical%20map.%20We%20describe%20the%20design%20of%20an%20automatic%20sampling%20approach%20using%20these%20contemporary%20data%20to%20collect%20thousands%20of%20graphical%20examples%20for%20the%20symbol%20of%20interest.%20These%20graphical%20examples%20are%20then%20used%20for%20robust%20learning%20to%20then%20carry%20out%20feature%20extraction%20in%20the%20entire%20map.%20We%20employ%20a%20Convolutional%20Neural%20Network%20%28LeNet%29%20for%20the%20recognition%20task.%20Results%20are%20promising%20and%20will%20guide%20the%20next%20steps%20in%20this%20research%20to%20provide%20an%20unsupervised%20approach%20to%20extracting%20features%20from%20historical%20maps.%22%2C%22date%22%3A%222017-07%22%2C%22proceedingsTitle%22%3A%228th%20International%20Con
ference%20of%20Pattern%20Recognition%20Systems%20%28ICPRS%202017%29%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1049%5C%2Fcp.2017.0144%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8362084%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A17%3A55Z%22%7D%7D%5D%7D
Vynikal, J. et al. Deep learning approaches for delineating wetlands on historical topographic maps. 2024
Hosseini, K. et al. MapReader: a computer vision pipeline for the semantic exploration of maps at scale. 2022
Wu, S. et al. A Closer Look at Segmentation Uncertainty of Scanned Historical Maps. 2022
Ståhl, N. et al. Identifying wetland areas in historical maps using deep convolutional neural networks. 2022
Content Description and Reasoning
Yang, A. et al. Evaluating and enhancing spatial cognition abilities of large language models. 2025
Liu, Z. et al. Geospatial Question Answering on Historical Maps Using Spatio-Temporal Knowledge Graphs and Large Language Models. 2025
Yuan, Y. et al. Leveraging LLMs and attention-mechanism for automatic annotation of historical maps. 2025
Liu, Z. et al. An Efficient System for Automatic Map Storytelling: A Case Study on Historical Maps. 2025
20they%20still%20have%20a%20limited%20understanding%20of%20maps%2C%20as%20their%20performance%20wanes%20when%20texts%20%28e.g.%2C%20titles%20and%20legends%29%20in%20maps%20are%20missing%20or%20inaccurate.%20Besides%2C%20it%20is%20inefficient%20or%20even%20impractical%20to%20fine-tune%20these%20models%20with%20users%26rsquo%3B%20own%20datasets.%20To%20address%20these%20problems%2C%20we%20propose%20a%20novel%20and%20lightweight%20map-captioning%20counterpart.%20Specifically%2C%20we%20fine-tune%20the%20state-of-the-art%20vision-language%20model%20CLIP%20to%20generate%20captions%20relevant%20to%20historical%20maps%20and%20enrich%20the%20captions%20with%20GPT%20models%20to%20tell%20a%20brief%20story%20regarding%20where%2C%20what%2C%20when%20and%20why%20of%20a%20given%20map.%20We%20propose%20a%20novel%20decision%20tree%20architecture%20to%20only%20generate%20captions%20relevant%20to%20the%20specified%20map%20type.%20Our%20system%20shows%20invariance%20to%20text%20alterations%20in%20maps.%20The%20system%20can%20be%20easily%20adapted%20and%20extended%20to%20other%20map%20types%20and%20scaled%20to%20a%20larger%20map%20captioning%20system.%22%2C%22date%22%3A%222025-06-09%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fagile-giss-6-5-2025%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F6%5C%2F5%5C%2F2025%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-08-14T14%3A27%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22U5RGLR6B%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Griffin%20and%20and%20Robinson%22%2C%22parsedDate%22%3A%222025-04-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGriffin%2C%20A.L.%20et%20al.%20%26lt%3Ba%
20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2025.2481692%26%23039%3B%26gt%3BHow%20do%20people%20understand%20maps%20and%20will%20AI%20ever%20understand%20them%3F%26lt%3B%5C%2Fa%26gt%3B%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22How%20do%20people%20understand%20maps%20and%20will%20AI%20ever%20understand%20them%3F%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Amy%20L.%22%2C%22lastName%22%3A%22Griffin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Anthony%20C.%22%2C%22lastName%22%3A%22and%20Robinson%22%7D%5D%2C%22abstractNote%22%3A%22The%20human%20brain%20is%20an%20incredible%20piece%20of%20cognitive%20machinery.%20Maps%20are%20similarly%20incredible.%20Although%20not%20every%20map%20user%20finds%20understanding%20a%20map%20to%20be%20an%20easy%20task%2C%20like%20every%20tool%20that%20humans%20use%2C%20with%20practice%2C%20it%20is%20possible%20for%20most%20people%20to%20learn%20to%20understand%20and%20use%20maps%20effectively.%20Maps%20encode%20a%20large%20amount%20of%20knowledge%20and%20can%20be%20useful%20in%20supporting%20a%20wide%20variety%20of%20activities%2C%20both%20professional%20and%20personal.%20But%20what%20does%20the%20future%20hold%20for%20maps%3F%20In%20this%20research%2C%20we%20explore%20what%20we%20know%20about%20one%20rapidly%20evolving%20technology%2C%20artificial%20intelligence%20%28AI%29%2C%20and%20what%20it%20might%20mean%20for%20how%20maps%20%28do%29%20work.%5Cn%5CnRecent%20news%20articles%20with%20titles%20such%20as%20%5Cu2018Heard%20AI%20is%20coming%20for%20your%20job%3F%20For%20these%20copywriters%2C%20that%20%5Cu2018future%5Cu2019%20arrived%20months%20ago%5Cu2019%20have%20explored%20whether%20general-purpose%20AI%20tools%2C%20such%20as%20chatbots%2C%20can%20perform%20tasks%20t
raditionally%20carried%20out%20by%20people.%20Although%20cartographers%20have%20been%20experimenting%20with%20AI%20for%20making%20maps%20for%20several%20decades%20and%20researchers%20continue%20to%20develop%20new%20AI-supported%20approaches%20for%20making%20maps%20%28see%20Kang%20et%20al.%2C%20Citation2024%20for%20a%20recent%20review%29%2C%20we%20will%20address%20a%20different%20question%20here%3A%20How%20might%20AI%20tools%20affect%20how%20we%20use%20maps%3F%20This%2C%20in%20turn%2C%20prompts%20the%20related%2C%20more%20fundamental%20question%20of%20whether%20AI%20tools%20can%20understand%20maps.%5Cn%5CnTo%20explore%20these%20questions%20we%20must%20first%20know%20what%20it%20means%20to%20understand%20a%20map.%20This%20depends%20on%20how%20we%20conceptualize%20maps.%20Is%20a%20map%20a%20designed%20artifact%20through%20which%20its%20maker%20aims%20to%20communicate%20a%20specific%20message%3F%20Is%20it%20a%20collection%20of%20arguments%20or%20propositions%20about%20the%20world%20that%20enable%20action%20to%20be%20taken%3F%20Is%20it%20a%20storehouse%20of%20information%20that%20is%20ready%20to%20be%20mined%20for%20wisdom%3F%20Or%20is%20it%20an%20aesthetic%20object%20%28perhaps%20even%20an%20artistic%20object%29%20that%20can%20be%20evocative%2C%20communicative%2C%20or%20both%3F%20We%20focus%20here%20on%20maps%20as%20informational%20and%20analytical%20graphics%2C%20recognizing%20that%20understanding%20maps%20as%20artistic%20and%20asethetic%20objects%20deserves%20its%20own%20fulsome%20treatment.%20We%20begin%20by%20looking%20at%20a%20selection%20of%20existing%20cartographic%20theoretical%20frameworks%20to%20examine%20what%20it%20means%20to%20understand%20a%20map.%20We%20can%20then%20use%20these%20ideas%20to%20assess%20to%20what%20extent%20today%5Cu2019s%20AI%20tools%20can%20understand%20maps%20and%20explore%20how%20their%20ability%20to%20do%20so%20might%20develop%20as%20AI%20tools%20continue%20to%20evolve%20in%20the%20future.%22%2C%22date%22%3A%222025-04-03%22%2C%22langua
ge%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F23729333.2025.2481692%22%2C%22ISSN%22%3A%222372-9333%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2025.2481692%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A25%3A36Z%22%7D%7D%2C%7B%22key%22%3A%22KK6ZAZ6Y%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Liang%20et%20al.%22%2C%22parsedDate%22%3A%222025-04-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLiang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0198971524001571%26%23039%3B%26gt%3BGeoAI-enhanced%20community%20detection%20on%20spatial%20networks%20with%20graph%20deep%20learning%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22GeoAI-enhanced%20community%20detection%20on%20spatial%20networks%20with%20graph%20deep%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yunlei%22%2C%22lastName%22%3A%22Liang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiawei%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wen%22%2C%22lastName%22%3A%22Ye%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%5D%2C%22abstractNote%22%3A%22Spatial%20networks%20are%20useful%20for%20modeling%20geographic%20phenomena%20where%20spatial%20interaction%20plays%20an%20important%20role.%20To%20analyze%20the%20spatial%20networks%2
0and%20their%20internal%20structures%2C%20graph-based%20methods%20such%20as%20community%20detection%20have%20been%20widely%20used.%20Community%20detection%20aims%20to%20extract%20strongly%20connected%20components%20from%20the%20network%20and%20reveal%20the%20hidden%20relationships%20between%20nodes%2C%20but%20they%20usually%20do%20not%20involve%20the%20attribute%20information.%20To%20consider%20edge-based%20interactions%20and%20node%20attributes%20together%2C%20this%20study%20proposed%20a%20family%20of%20GeoAI-enhanced%20unsupervised%20community%20detection%20methods%20called%20region2vec%20based%20on%20Graph%20Attention%20Networks%20%28GAT%29%20and%20Graph%20Convolutional%20Networks%20%28GCN%29.%20The%20region2vec%20methods%20generate%20node%20neural%20embeddings%20based%20on%20attribute%20similarity%2C%20geographic%20adjacency%20and%20spatial%20interactions%2C%20and%20then%20extract%20network%20communities%20based%20on%20node%20embeddings%20using%20agglomerative%20clustering.%20The%20proposed%20GeoAI-based%20methods%20are%20compared%20with%20multiple%20baselines%20and%20perform%20the%20best%20when%20one%20wants%20to%20maximize%20node%20attribute%20similarity%20and%20spatial%20interaction%20intensity%20simultaneously%20within%20the%20spatial%20network%20communities.%20It%20is%20further%20applied%20in%20the%20shortage%20area%20delineation%20problem%20in%20public%20health%20and%20demonstrates%20its%20promise%20in%20regionalization%20problems.%22%2C%22date%22%3A%222025-04-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.compenvurbsys.2024.102228%22%2C%22ISSN%22%3A%220198-9715%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0198971524001571%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-06-30T14%3A28%3A55Z%22%7D%7D%2C%7B%22key%22%3A%22XQLFZC3W%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xing%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-18
%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXing%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2503.14607%26%23039%3B%26gt%3BCan%20Large%20Vision%20Language%20Models%20Read%20Maps%20Like%20a%20Human%3F%26lt%3B%5C%2Fa%26gt%3B%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Can%20Large%20Vision%20Language%20Models%20Read%20Maps%20Like%20a%20Human%3F%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shuo%22%2C%22lastName%22%3A%22Xing%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zezhou%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shuangyu%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kaiyuan%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yanjia%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuping%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiachen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dezhen%22%2C%22lastName%22%3A%22Song%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhengzhong%22%2C%22lastName%22%3A%22Tu%22%7D%5D%2C%22abstractNote%22%3A%22In%20this%20paper%2C%20we%20introduce%20MapBench-the%20first%20dataset%20specifically%20designed%20for%20human-readable%2C%20pixel-based%20map-based%20outdoor%20navigation%2C%20curated%20from%20complex%
20path%20finding%20scenarios.%20MapBench%20comprises%20over%201600%20pixel%20space%20map%20path%20finding%20problems%20from%20100%20diverse%20maps.%20In%20MapBench%2C%20LVLMs%20generate%20language-based%20navigation%20instructions%20given%20a%20map%20image%20and%20a%20query%20with%20beginning%20and%20end%20landmarks.%20For%20each%20map%2C%20MapBench%20provides%20Map%20Space%20Scene%20Graph%20%28MSSG%29%20as%20an%20indexing%20data%20structure%20to%20convert%20between%20natural%20language%20and%20evaluate%20LVLM-generated%20results.%20We%20demonstrate%20that%20MapBench%20significantly%20challenges%20state-of-the-art%20LVLMs%20both%20zero-shot%20prompting%20and%20a%20Chain-of-Thought%20%28CoT%29%20augmented%20reasoning%20framework%20that%20decomposes%20map%20navigation%20into%20sequential%20cognitive%20processes.%20Our%20evaluation%20of%20both%20open-source%20and%20closed-source%20LVLMs%20underscores%20the%20substantial%20difficulty%20posed%20by%20MapBench%2C%20revealing%20critical%20limitations%20in%20their%20spatial%20reasoning%20and%20structured%20decision-making%20capabilities.%20We%20release%20all%20the%20code%20and%20dataset%20in%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Ftaco-group%5C%2FMapBench.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22arXiv%3A2503.14607%22%2C%22date%22%3A%222025-03-18%22%2C%22DOI%22%3A%2210.48550%5C%2FarXiv.2503.14607%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2503.14607%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-06-30T14%3A28%3A58Z%22%7D%7D%2C%7B%22key%22%3A%22UDPIRBWQ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Van%20Staden%22%2C%22parsedDate%22%3A%222025-03-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26
gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BVan%20Staden%2C%20C.J.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fupjournals.up.ac.za%5C%2Findex.php%5C%2Fjogea%5C%2Farticle%5C%2Fview%5C%2F5442%26%23039%3B%26gt%3BGeography%26%23039%3Bs%20ability%20to%20interpret%20topographic%20maps%20and%20orthophotographs%2C%20adoption%20prediction%2C%20and%20learning%20activities%20to%20promote%20responsible%20usage%20in%20classrooms%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Geography%27s%20ability%20to%20interpret%20topographic%20maps%20and%20orthophotographs%2C%20adoption%20prediction%2C%20and%20learning%20activities%20to%20promote%20responsible%20usage%20in%20classrooms%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christina%20Johanna%22%2C%22lastName%22%3A%22Van%20Staden%22%7D%5D%2C%22abstractNote%22%3A%22Learners%20find%20it%20challenging%20to%20interpret%20topographic%20maps%20and%20orthophotographs.%20Although%20the%20chatbot%2C%20Geography%2C%20might%20be%20useful%20for%20this%20purpose%2C%20not%20research%20is%20available%20regarding%20its%20abilities%20and%20limitations.%20Thus%2C%20the%20aim%20of%20this%20multi-phase%20mixed%20methods%20research%20was%20threefold%2C%20namely%20%28a%29%20to%20explore%20Geography%26%23039%3Bstopographic%20map%20and%20orthophotograph%20interpretation%20skills%2C%20%28b%29%20to%20determine%20the%20factors%20which%20can%20drive%20its%20adoption%2C%20and%20to%20%28c%29%20suggest%20learning%20activities%20to%20promote%20responsible%20usage%20in%20geography%20classrooms.%20During%20the%20first%20phase%2C%20an%20explorative%20case%20revealed%20that%20Geography%20can%20be%20useful%20to%20interpret%20topographic%20maps%20and%20orthophotographs%2C%20but%20it%20can%20also%20f
abricate%20facts.%20During%20the%20second%20phase%2C%20the%20high%20adoption%20prediction%20score%20%28six%20out%20of%20nine%29%2C%20indicated%20a%20need%20to%20responsible%20usage.%20Thus%2C%20learning%20activities%20were%20designed%20during%20the%20last%20phase%20to%20promote%20responsible%20use%20in%20geography%20classrooms.%22%2C%22date%22%3A%222025-03-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.46622%5C%2Fjogea.v8i1.5442%22%2C%22ISSN%22%3A%2227889114%2C%2027889114%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fupjournals.up.ac.za%5C%2Findex.php%5C%2Fjogea%5C%2Farticle%5C%2Fview%5C%2F5442%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-10T20%3A09%3A46Z%22%7D%7D%2C%7B%22key%22%3A%22FV4PYZUL%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Memduho%5Cu011flu%22%2C%22parsedDate%22%3A%222025-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMemduho%5Cu011flu%2C%20A.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F14%5C%2F1%5C%2F35%26%23039%3B%26gt%3BTowards%20AI-Assisted%20Mapmaking%3A%20Assessing%20the%20Capabilities%20of%20GPT-4o%20in%20Cartographic%20Design%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Towards%20AI-Assisted%20Mapmaking%3A%20Assessing%20the%20Capabilities%20of%20GPT-4o%20in%20Cartographic%20Design%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Abdulkadir%22%2C%22lastName%22%3A%22Memduho%5Cu011flu%22%7D%5D%2C%22abstractNote%22%3A%22Cartographic%20design%20is%20fundamental%20to%20effectiv
e%20mapmaking%2C%20requiring%20adherence%20to%20principles%20such%20as%20visual%20hierarchy%2C%20symbolization%2C%20and%20color%20theory%20to%20convey%20spatial%20information%20accurately%20and%20intuitively%2C%20while%20Artificial%20Intelligence%20%28AI%29%20and%20Large%20Language%20Models%20%28LLMs%29%20have%20transformed%20various%20fields%2C%20their%20application%20in%20cartographic%20design%20remains%20underexplored.%20This%20study%20assesses%20the%20capabilities%20of%20a%20multimodal%20advanced%20LLM%2C%20GPT-4o%2C%20in%20understanding%20and%20suggesting%20cartographic%20design%20elements%2C%20focusing%20on%20adherence%20to%20established%20cartographic%20principles.%20Two%20assessments%20were%20conducted%3A%20a%20text-to-text%20evaluation%20and%20an%20image-to-text%20evaluation.%20In%20the%20text-to-text%20assessment%2C%20GPT-4o%20was%20presented%20with%2015%20queries%20derived%20from%20key%20concepts%20in%20cartography%2C%20covering%20classification%2C%20symbolization%2C%20visual%20hierarchy%2C%20color%20theory%2C%20and%20typography.%20Each%20query%20was%20posed%20multiple%20times%20under%20different%20temperature%20settings%20to%20evaluate%20consistency%20and%20variability.%20In%20the%20image-to-text%20evaluation%2C%20GPT-4o%20analyzed%20maps%20containing%20deliberate%20cartographic%20errors%20to%20assess%20its%20ability%20to%20identify%20issues%20and%20suggest%20improvements.%20The%20results%20indicate%20that%20GPT-4o%20demonstrates%20general%20reliability%20in%20text-based%20tasks%2C%20with%20variability%20influenced%20by%20temperature%20settings.%20The%20model%20showed%20proficiency%20in%20classification%20and%20symbolization%20tasks%20but%20occasionally%20deviated%20from%20theoretical%20expectations.%20In%20visual%20hierarchy%20and%20layout%2C%20the%20model%20performed%20consistently%2C%20suggesting%20appropriate%20design%20choices.%20In%20the%20image-to-text%20assessment%2C%20GPT-4o%20effectively%20identified%20critical%20design%20flaws%20such%20as%20in
appropriate%20color%20schemes%2C%20poor%20contrast%20and%20misuse%20of%20shape%20and%20size%20variables%2C%20offering%20actionable%20suggestions%20for%20improvement.%20However%2C%20limitations%20include%20dependency%20on%20input%20quality%20and%20challenges%20in%20interpreting%20nuanced%20spatial%20relationships.%20The%20study%20concludes%20that%20LLMs%20like%20GPT-4o%20have%20significant%20potential%20in%20cartographic%20design%2C%20particularly%20for%20tasks%20involving%20creative%20exploration%20and%20routine%20design%20support.%20Their%20ability%20to%20critique%20and%20generate%20cartographic%20elements%20positions%20them%20as%20valuable%20tools%20for%20enhancing%20human%20expertise.%20Further%20research%20is%20recommended%20to%20enhance%20their%20spatial%20reasoning%20capabilities%20and%20expand%20their%20use%20of%20visual%20variables%20beyond%20color%2C%20thereby%20improving%20their%20applicability%20in%20professional%20cartographic%20workflows.%22%2C%22date%22%3A%222025%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi14010035%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F14%5C%2F1%5C%2F35%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-06-30T14%3A28%3A51Z%22%7D%7D%2C%7B%22key%22%3A%225ZZC5BUB%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222025%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2025.2455112%26%23039%3B%26gt%3BMapReader%3A%20a%20framework%20f
or%20learning%20a%20visual%20language%20model%20for%20map%20analysis%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22MapReader%3A%20a%20framework%20for%20learning%20a%20visual%20language%20model%20for%20map%20analysis%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifan%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenbo%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ziyi%22%2C%22lastName%22%3A%22Zeng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Keying%22%2C%22lastName%22%3A%22Jiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jingxuan%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wen%22%2C%22lastName%22%3A%22Min%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wei%22%2C%22lastName%22%3A%22Luo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qingfeng%22%2C%22lastName%22%3A%22Guan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianfeng%22%2C%22lastName%22%3A%22Lin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenhao%22%2C%22lastName%22%3A%22Yu%22%7D%5D%2C%22abstractNote%22%3A%22Intelligent%20map%20analysis%20is%20an%20important%20yet%20challenging%20topic.%20Recently%2C%20the%20development%20of%20large%20models%2C%20especially%20Visual%20Language%20Models%20%28VLMs%29%2C%20has%20shown%20potential%20for%20intelligent%20image%20analysis.%20However%2C%20these%20models%20are%20primarily%20trained%20on%20natural%20images%2C%20which%20have%20intrinsic%20differences%20from%20maps.%20Consequently%2C%20there%20remains%20a%20gap%20in%20applying%20existing%20general-domain%20VLMs%20to%20map%20analysis.%20To%20address%20this%20issue%2C%20we%20propose%20a%20frame
work%20for%20developing%20a%20specialized%20VLM%2C%20called%20MapReader.%20To%20achieve%20this%20goal%2C%20a%20comprehensive%20data%20resource%20is%20collected%20using%20a%20strategy%20that%20combines%20self-instruct%20with%20expert%20refinement%2C%20including%20training%20data%20%28MapTrain%3A%202%2C000%20pairs%20of%20maps%20and%20descriptions%29%20and%20evaluation%20data%20%28MapEval%3A%20250%20maps%20and%20500%20map-related%20questions%29.%20Based%20on%20the%20training%20data%2C%20MapReader%20is%20fine-tuned%20on%20top%20of%20a%20general-domain%20VLM%20to%20learn%20to%20understand%20and%20describe%20map%20contents.%20The%20evaluation%20results%20on%20MapEval%20suggest%20that%3A%20%281%29%20MapReader%20can%20accept%20map%20inputs%20and%20generate%20detailed%20descriptions%20of%20core%20geographic%20information%2C%20and%20it%20also%20possesses%20visual%20question-answering%20capabilities%2C%20showing%20potential%20for%20application%20in%20various%20map%20analysis%20scenarios%2C%20such%20as%20accessible%20map%20reading%20and%20robotic%20map%20usage%3B%20%282%29%20The%20proposed%20data%20collection%20strategy%20is%20effective%2C%20and%20the%20collected%20dataset%20can%20serve%20as%20a%20benchmark%20to%20promote%20further%20map%20analysis%20research.%22%2C%22date%22%3A%222025%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2025.2455112%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2025.2455112%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-11T15%3A29%3A29Z%22%7D%7D%2C%7B%22key%22%3A%22KT3SXWHU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lipka%20et%20al.%22%2C%22parsedDate%22%3A%222024-10-14%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20cla
ss%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLipka%2C%20K.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2076-3417%5C%2F14%5C%2F20%5C%2F9343%26%23039%3B%26gt%3BThe%20Use%20of%20Language%20Models%20to%20Support%20the%20Development%20of%20Cartographic%20Descriptions%20of%20a%20Building%26%23039%3Bs%20Interior%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22The%20Use%20of%20Language%20Models%20to%20Support%20the%20Development%20of%20Cartographic%20Descriptions%20of%20a%20Building%27s%20Interior%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Krzysztof%22%2C%22lastName%22%3A%22Lipka%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dariusz%22%2C%22lastName%22%3A%22Gotlib%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kamil%22%2C%22lastName%22%3A%22Choroma%5Cu0144ski%22%7D%5D%2C%22abstractNote%22%3A%22The%20development%20and%20popularization%20of%20navigation%20applications%20are%20increasing%20expectations%20for%20their%20quality%20and%20functionality.%20Users%20need%20continuous%20navigation%20not%20only%20outdoors%2C%20but%20also%20indoors.%20In%20this%20case%2C%20however%2C%20the%20perception%20of%20space%20and%20movement%20is%20somewhat%20different%20than%20it%20is%20outside.%20One%20potential%20method%20of%20meeting%20this%20need%20may%20be%20the%20use%20of%20so-called%20geo-descriptions%5Cu2014multi-level%20textual%20descriptions%20relating%20to%20a%20point%2C%20line%20or%20area%20in%20a%20building.%20Currently%2C%20geo-descriptions%20are%20created%20manually.%20However%2C%20this%20is%20a%20rather%20time-consuming%20and%20complex%20process.%20Therefore%2C%20this%20study%20undertook%20to%20automate%20this%20process%20as%20much%20as%20possible.%20The%20study%20use
Yang, A. et al. Evaluating and enhancing spatial cognition abilities of large language models. 2025
Yuan, Y. et al. Leveraging LLMs and attention-mechanism for automatic annotation of historical maps. 2025
Liu, Z. et al. An Efficient System for Automatic Map Storytelling: A Case Study on Historical Maps. 2025
Griffin, A.L. et al. How do people understand maps and will AI ever understand them? 2025
Liang, Y. et al. GeoAI-enhanced community detection on spatial networks with graph deep learning. 2025
Xing, S. et al. Can Large Vision Language Models Read Maps Like a Human? 2025
Zhang, Y. et al. MapReader: a framework for learning a visual language model for map analysis. 2025
Jaafar, S.A. et al. The role of geospatial artificial intelligence (GeoAI) in smart built environment mapping: automatic object detection of raster topographic maps in Malaysia. 2024
Xu, J. et al. Map Reading and Analysis with GPT-4V(ision). 2024
Metadata Retrieval
Kirsanova, S. et al. Detecting Legend Items on Historical Maps Using GPT-4o with In-Context Learning. 2025
Tarafder, E. et al. Enriching the metadata of map images: a deep learning approach with geographic information systems-based data augmentation. 2025
Xiaoying, Q. et al. Semantic organization for historical maps: Classification, representation, association. 2025
Wen, Y. et al. Multi-task deep learning strategy for map-type classification. 2024
Lenc, L. et al. Towards Historical Map Analysis Using Deep Learning Techniques. 2023
Hu, Y. et al. Enriching the metadata of map images: a deep learning approach with GIS-based data augmentation. 2022
Li, J. Computational Cartographic Recognition: Exploring the Use of Machine Learning and Other Computational Approaches to Map Reading. 2022
Touya, G. et al. Inferring the scale and content of a map using deep learning. 2020
2%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2020-17-2020%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fhal.archives-ouvertes.fr%5C%2Fhal-02873414%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A43%3A42Z%22%7D%7D%2C%7B%22key%22%3A%22MWDTNQBD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222018%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhou%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.1805.10402%26%23039%3B%26gt%3BDeep%20Convolutional%20Neural%20Networks%20for%20Map-Type%20Classification%26lt%3B%5C%2Fa%26gt%3B.%202018%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22Deep%20Convolutional%20Neural%20Networks%20for%20Map-Type%20Classification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiran%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenwen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Samantha%22%2C%22lastName%22%3A%22Arundel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Maps%20are%20an%20important%20medium%20that%20enable%20people%20to%20comprehensively%20understand%20the%20configuration%20of%20cultural%20activities%20and%20natural%20elements%20over%20different%20times%20and%20places.%20Although%20massive%20maps%20are%20available%20in%20the%20digital%20era%2C%20h
ow%20to%20effectively%20and%20accurately%20access%20the%20required%20map%20remains%20a%20challenge%20today.%20Previous%20works%20partially%20related%20to%20map-type%20classification%20mainly%20focused%20on%20map%20comparison%20and%20map%20matching%20at%20the%20local%20scale.%20The%20features%20derived%20from%20local%20map%20areas%20might%20be%20insufficient%20to%20characterize%20map%20content.%20To%20facilitate%20establishing%20an%20automatic%20approach%20for%20accessing%20the%20needed%20map%2C%20this%20paper%20reports%20our%20investigation%20into%20using%20deep%20learning%20techniques%20to%20recognize%20seven%20types%20of%20map%2C%20including%20topographic%20map%2C%20terrain%20map%2C%20physical%20map%2C%20urban%20scene%20map%2C%20the%20National%20Map%2C%203D%20map%2C%20nighttime%20map%2C%20orthophoto%20map%2C%20and%20land%20cover%20classification%20map.%20Experimental%20results%20show%20that%20the%20state-of-the-art%20deep%20convolutional%20neural%20networks%20can%20support%20automatic%20map-type%20classification.%20Additionally%2C%20the%20classification%20accuracy%20varies%20according%20to%20different%20map-types.%20We%20hope%20our%20work%20can%20contribute%20to%20the%20implementation%20of%20deep%20learning%20techniques%20in%20cartographical%20community%20and%20advance%20the%20progress%20of%20Geographical%20Artificial%20Intelligence%20%28GeoAI%29.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22arXiv%22%2C%22archiveID%22%3A%22%22%2C%22date%22%3A%222018%22%2C%22DOI%22%3A%22%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.48550%5C%2FarXiv.1805.10402%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A56%3A45Z%22%7D%7D%5D%7D
Kirsanova, S. et al. Detecting Legend Items on Historical Maps Using GPT-4o with In-Context Learning. 2025
Xiaoying, Q. et al. Semantic organization for historical maps: Classification, representation, association. 2025
Wen, Y. et al. Multi-task deep learning strategy for map-type classification. 2024
Lenc, L. et al. Towards Historical Map Analysis Using Deep Learning Techniques. 2023
Touya, G. et al. Inferring the scale and content of a map using deep learning. 2020
Zhou, X. et al. Deep Convolutional Neural Networks for Map-Type Classification. 2018
Design Analysis
Wang, C. et al. TransMI: a transfer-learning method for generalized map information evaluation. 2025
Wang, Z. et al. The assessment of wemaps audit requirements based on deep learning. 2024
Xi, D. et al. Research on map emotional semantics using deep learning approach. 2023
Keskin, M. et al. Potential of eye-tracking for interactive geovisual exploration aided by machine learning. 2023
Text-to-Map
wly%20emergent%20global%20thinking%20and%20path%20planning%20abilities%20of%20the%20GPT.%22%2C%22date%22%3A%222024-08%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2062nd%20Annual%20Meeting%20of%20the%20Association%20for%20Computational%20Linguistics%20%28Volume%201%3A%20Long%20Papers%29%22%2C%22conferenceName%22%3A%22ACL%202024%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.18653%5C%2Fv1%5C%2F2024.acl-long.529%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Faclanthology.org%5C%2F2024.acl-long.529%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-03-20T22%3A47%3A29Z%22%7D%7D%2C%7B%22key%22%3A%22SVAW87D7%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222024-07-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS1569843224003303%26%23039%3B%26gt%3BGeoGPT%3A%20An%20assistant%20for%20understanding%20and%20processing%20geospatial%20tasks%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22GeoGPT%3A%20An%20assistant%20for%20understanding%20and%20processing%20geospatial%20tasks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifan%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cheng%22%2C%22lastName%22%3A%22Wei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhengting%22%2C%22lastN
ame%22%3A%22He%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenhao%22%2C%22lastName%22%3A%22Yu%22%7D%5D%2C%22abstractNote%22%3A%22Decision-makers%20in%20GIS%20often%20need%20to%20combine%20multiple%20spatial%20algorithms%20and%20operations%20to%20solve%20geospatial%20tasks.%20While%20professionals%20can%20understand%20and%20solve%20these%20tasks%20by%20using%20GIS%20tools%20sequentially%2C%20developing%20workflows%20for%20various%20tasks%20can%20be%20inefficient%2C%20as%20even%20slight%20differences%20in%20tasks%20require%20corresponding%20adjustments%20in%20the%20workflow.%20Recently%2C%20large%20language%20models%20%28e.g.%2C%20ChatGPT%29%20presented%20a%20strong%20performance%20in%20semantic%20understanding%20and%20reasoning.%20Especially%2C%20AutoGPT%20can%20further%20extend%20the%20capabilities%20of%20large%20language%20models%20%28LLMs%29%20by%20automatically%20reasoning%20and%20calling%20externally%20defined%20tools.%20Inspired%20by%20these%20studies%2C%20we%20attempt%20to%20increase%20the%20efficiency%20of%20developing%20workflows%20for%20handling%20geoprocessing%20tasks%20by%20integrating%20the%20semantic%20understanding%20ability%20inherent%20in%20LLMs%20with%20mature%20tools%20within%20the%20GIS%20community.%20Specifically%2C%20we%20develop%20a%20new%20framework%20called%20GeoGPT%20that%20can%20conduct%20geospatial%20data%20collection%2C%20processing%2C%20and%20analysis%20in%20an%20autonomous%20manner.%20In%20this%20framework%2C%20a%20LLM%20is%20used%20to%20understand%20the%20demands%20of%20users%2C%20and%20then%20think%2C%20plan%2C%20and%20execute%20defined%20GIS%20tools%20sequentially%20to%20output%20final%20effective%20results.%20In%20this%20process%2C%20our%20framework%20is%20user-friendly%2C%20accepting%20natural%20language%20instructions%20as%20input%20and%20adapting%20to%20various%20geospatial%20tasks%2C%20which%20can%20serve%20as%20an%20assistant%20for%20GIS%20professionals.%20A%20systemic%20evaluation%20and%20several%20cases
%2C%20including%20geospatial%20data%20crawling%2C%20spatial%20query%2C%20facility%20siting%2C%20and%20mapping%2C%20validate%20the%20effectiveness%20of%20our%20framework.%20Though%20limited%20cases%20are%20presented%20in%20this%20paper%2C%20GeoGPT%20can%20be%20further%20extended%20to%20various%20tasks%20by%20equipping%20with%20more%20GIS%20tools%2C%20and%20we%20think%20the%20paradigm%20of%20%5Cu201cfoundational%20plus%20professional%5Cu201d%20implied%20in%20GeoGPT%20provides%20an%20effective%20way%20to%20develop%20next-generation%20GIS%20in%20this%20era%20of%20large%20foundation%20models.%22%2C%22date%22%3A%222024-07-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.jag.2024.103976%22%2C%22ISSN%22%3A%221569-8432%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS1569843224003303%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-11-15T18%3A46%3A54Z%22%7D%7D%2C%7B%22key%22%3A%223NC982LV%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Dunkel%20et%20al.%22%2C%22parsedDate%22%3A%222024-03-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDunkel%2C%20A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs42489-024-00159-9%26%23039%3B%26gt%3BGenerative%20Text-to-Image%20Diffusion%20for%20Automated%20Map%20Production%20Based%20on%20Geosocial%20Media%20Data%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Generative%20Text-to-Image%20Diffusion%20for%20Automated%20Map%20Production%20Based%20on
%20Geosocial%20Media%20Data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexander%22%2C%22lastName%22%3A%22Dunkel%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dirk%22%2C%22lastName%22%3A%22Burghardt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Madalina%22%2C%22lastName%22%3A%22Gugulica%22%7D%5D%2C%22abstractNote%22%3A%22The%20state%20of%20generative%20AI%20has%20taken%20a%20leap%20forward%20with%20the%20availability%20of%20open%20source%20diffusion%20models.%20Here%2C%20we%20demonstrate%20an%20integrated%20workflow%20that%20uses%20text-to-image%20stable%20diffusion%20at%20its%20core%20to%20automatically%20generate%20icon%20maps%20such%20as%20for%20the%20area%20of%20the%20Gro%5Cu00dfer%20Garten%2C%20a%20tourist%20hotspot%20in%20Dresden%2C%20Germany.%20The%20workflow%20is%20based%20on%20the%20aggregation%20of%20geosocial%20media%20data%20from%20Twitter%2C%20Flickr%2C%20Instagram%20and%20iNaturalist.%20This%20data%20are%20used%20to%20create%20diffusion%20prompts%20to%20account%20for%20the%20collective%20attribution%20of%20meaning%20and%20importance%20by%20the%20population%20in%20map%20generation.%20Specifically%2C%20we%20contribute%20methods%20for%20simplifying%20the%20variety%20of%20contexts%20communicated%20on%20social%20media%20through%20spatial%20clustering%20and%20semantic%20filtering%20for%20use%20in%20prompts%2C%20and%20then%20demonstrate%20how%20this%20human-contributed%20baseline%20data%20can%20be%20used%20in%20prompt%20engineering%20to%20automatically%20generate%20icon%20maps.%20Replacing%20labels%20on%20maps%20with%20expressive%20graphics%20has%20the%20general%20advantage%20of%20reaching%20a%20broader%20audience%2C%20such%20as%20children%20and%20other%20illiterate%20groups.%20For%20example%2C%20the%20resulting%20maps%20can%20be%20used%20to%20inform%20tourists%20of%20all%20backgrounds%20about%20important%20activities%2C%20points%20of%20interest%2C%20and%20landmarks%20without%2
0the%20need%20for%20translation.%20Several%20challenges%20are%20identified%20and%20possible%20future%20optimizations%20are%20described%20for%20different%20steps%20of%20the%20process.%20The%20code%20and%20data%20are%20fully%20provided%20and%20shared%20in%20several%20Jupyter%20notebooks%2C%20allowing%20for%20transparent%20replication%20of%20the%20workflow%20and%20adoption%20to%20other%20domains%20or%20datasets.%22%2C%22date%22%3A%222024-03-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs42489-024-00159-9%22%2C%22ISSN%22%3A%222524-4965%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs42489-024-00159-9%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-28T18%3A58%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22H8VEBXGN%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Tao%20and%20Xu%22%2C%22parsedDate%22%3A%222023-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BTao%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F7%5C%2F284%26%23039%3B%26gt%3BMapping%20with%20ChatGPT%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Mapping%20with%20ChatGPT%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ran%22%2C%22lastName%22%3A%22Tao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jinwen%22%2C%22lastName%22%3A%22Xu%22%7D%5D%2C%22abstractNote%22%3A%22The%20emergence%20and%20rapid%20advancement%20of%20large%20language%20models%20%28LLMs%29%2C%20represented%20by%20Op
enAI%5Cu2019s%20Generative%20Pre-trained%20Transformer%20%28GPT%29%2C%20has%20brought%20up%20new%20opportunities%20across%20various%20industries%20and%20disciplines.%20These%20cutting-edge%20technologies%20are%20transforming%20the%20way%20we%20interact%20with%20information%2C%20communicate%2C%20and%20solve%20complex%20problems.%20We%20conducted%20a%20pilot%20study%20exploring%20making%20maps%20with%20ChatGPT%2C%20a%20popular%20artificial%20intelligence%20%28AI%29%20chatbot.%20Specifically%2C%20we%20tested%20designing%20thematic%20maps%20using%20given%20or%20public%20geospatial%20data%2C%20as%20well%20as%20creating%20mental%20maps%20purely%20using%20textual%20descriptions%20of%20geographic%20space.%20We%20conclude%20that%20ChatGPT%20provides%20a%20useful%20alternative%20solution%20for%20mapping%20given%20its%20unique%20advantages%2C%20such%20as%20lowering%20the%20barrier%20to%20producing%20maps%2C%20boosting%20the%20efficiency%20of%20massive%20map%20production%2C%20and%20understanding%20geographical%20space%20with%20its%20spatial%20thinking%20capability.%20However%2C%20mapping%20with%20ChatGPT%20still%20has%20limitations%20at%20the%20current%20stage%2C%20such%20as%20its%20unequal%20benefits%20for%20different%20users%20and%20dependence%20on%20user%20intervention%20for%20quality%20control.%22%2C%22date%22%3A%222023%5C%2F7%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi12070284%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F12%5C%2F7%5C%2F284%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-02-20T19%3A39%3A25Z%22%7D%7D%5D%7D
Zhang, Y. et al. GeoGPT: An assistant for understanding and processing geospatial tasks. 2024
Dunkel, A. et al. Generative Text-to-Image Diffusion for Automated Map Production Based on Geosocial Media Data. 2024
Tao, R. et al. Mapping with ChatGPT. 2023
Relief Shading
Wang, Y. et al. Construction of a neural network model for small-scale shaded relief maps constrained by topographic feature lines. 2025
Jiang, W. et al. Construction of a small-scale relief shading neural network model based on the attention mechanism. 2025
Yan, L. et al. Integrating terrain structure characteristics into generative adversarial nets for hillshade generation. 2024
Farmakis-Serebryakova, M. et al. Scale- and Resolution-Adapted Shaded Relief Generation Using U-Net. 2024
Bian, C. et al. Generation and optimisation of colour-shaded relief maps using neural networks. 2024
Li, S. et al. Generation Method for Shaded Relief Based on Conditional Generative Adversarial Nets. 2022
22%2C%22lastName%22%3A%22Wen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhao%22%2C%22lastName%22%3A%22Zhou%22%7D%5D%2C%22abstractNote%22%3A%22Relief%20shading%20is%20the%20primary%20method%20for%20effectively%20representing%20three-dimensional%20terrain%20on%20a%20two-dimensional%20plane.%20Despite%20its%20expressiveness%2C%20manual%20relief%20shading%20is%20difficult%20and%20time-consuming.%20In%20contrast%2C%20although%20analytical%20relief%20shading%20is%20fast%20and%20efficient%2C%20the%20visual%20effect%20is%20quite%20different%20from%20that%20of%20manual%20relief%20shading%20due%20to%20the%20low%20degree%20of%20terrain%20generalisation%2C%20inability%20to%20adjust%20local%20illumination%2C%20and%20difficulty%20in%20exaggerating%20and%20selective%20representation.%20We%20introduce%20deep%20learning%20technology%20to%20propose%20a%20generation%20method%20for%20shaded%20relief%20based%20on%20conditional%20generative%20adversarial%20nets.%20This%20method%20takes%20the%20set%20of%20manual%20relief%20shading-digital%20elevation%20model%20%28DEM%29%20slices%20as%20a%20priori%20knowledge%2C%20optimises%20network%20parameters%20through%20a%20continuous%20game%20of%20%5Cu201cgeneration-discrimination%5Cu201d%2C%20and%20produces%20a%20shaded%20relief%20map%20of%20any%20region%20based%20on%20the%20DEM.%20Test%20results%20indicate%20that%20the%20proposed%20method%20retains%20the%20advantages%20of%20manual%20relief%20shading%20and%20can%20quickly%20generate%20shaded%20relief%20with%20quality%20and%20artistic%20style%20similar%20to%20those%20of%20manual%20shading.%20Compared%20with%20other%20networks%2C%20the%20shaded%20relief%20generated%20by%20the%20proposed%20method%20not%20only%20depicts%20the%20terrain%20clearly%20but%20also%20achieves%20a%20good%20generalisation%20effect.%20Moreover%2C%20through%20the%20use%20of%20an%20adversarial%20structure%2C%20the%20network%20demonstrates%20stronger%20cross-scale%20generation%20ability.%22%2C%22date%22%3A%22
2022%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11070374%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F7%5C%2F374%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T11%3A39%3A08Z%22%7D%7D%2C%7B%22key%22%3A%22Y9ULC39N%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Jenny%20et%20al.%22%2C%22parsedDate%22%3A%222021-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BJenny%2C%20B.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9222263%26%23039%3B%26gt%3BCartographic%20Relief%20Shading%20with%20Neural%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Cartographic%20Relief%20Shading%20with%20Neural%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bernhard%22%2C%22lastName%22%3A%22Jenny%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dilpreet%22%2C%22lastName%22%3A%22Singh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marianna%22%2C%22lastName%22%3A%22Farmakis-Serebryakova%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jeffery%20Chieh%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Shaded%20relief%20is%20an%20
effective%20method%20for%20visualising%20terrain%20on%20topographic%20maps%2C%20especially%20when%20the%20direction%20of%20illumination%20is%20adapted%20locally%20to%20emphasise%20individual%20terrain%20features.%20However%2C%20digital%20shading%20algorithms%20are%20unable%20to%20fully%20match%20the%20expressiveness%20of%20hand-crafted%20masterpieces%2C%20which%20are%20created%20through%20a%20laborious%20process%20by%20highly%20specialised%20cartographers.%20We%20replicate%20hand-drawn%20relief%20shading%20using%20U-Net%20neural%20networks.%20The%20deep%20neural%20networks%20are%20trained%20with%20manual%20shaded%20relief%20images%20of%20the%20Swiss%20topographic%20map%20series%20and%20terrain%20models%20of%20the%20same%20area.%20The%20networks%20generate%20shaded%20relief%20that%20closely%20resemble%20hand-drawn%20shaded%20relief%20art.%20The%20networks%20learn%20essential%20design%20principles%20from%20manual%20relief%20shading%20such%20as%20removing%20unnecessary%20terrain%20details%2C%20locally%20adjusting%20the%20illumination%20direction%20to%20accentuate%20individual%20terrain%20features%2C%20and%20varying%20brightness%20to%20emphasise%20larger%20landforms.%20Neural%20network%20shadings%20are%20generated%20from%20digital%20elevation%20models%20in%20a%20few%20seconds%2C%20and%20a%20study%20with%2018%20relief%20shading%20experts%20found%20that%20they%20are%20of%20high%20quality.%22%2C%22date%22%3A%222021-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTVCG.2020.3030456%22%2C%22ISSN%22%3A%221941-0506%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9222263%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T11%3A38%3A40Z%22%7D%7D%5D%7D
Farmakis-Serebryakova, M. et al. Scale- and Resolution-Adapted Shaded Relief Generation Using U-Net. 2024
Bian, C. et al. Generation and optimisation of colour-shaded relief maps using neural networks. 2024
Jenny, B. et al. Cartographic Relief Shading with Neural Networks. 2021
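The shaded-relief papers above (e.g. Jenny et al. 2021) train neural networks on DEM–manual shading pairs, improving on the analytical hillshading baseline. As orientation for readers new to the topic, a minimal Lambertian analytical hillshade can be sketched in Python; the `hillshade` helper and its default sun parameters are illustrative, not taken from any of the cited works:

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Classic Lambertian analytical hillshade of a DEM (2-D elevation array)."""
    az = np.radians(360.0 - azimuth_deg + 90.0)  # compass azimuth -> math angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)    # finite-difference terrain slopes
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)  # 0 = full shadow, 1 = facing the light

# Flat terrain is uniformly lit at sin(altitude), about 0.707 for a 45-degree sun
print(hillshade(np.zeros((4, 4)))[0, 0])
```

This per-pixel, fixed-illumination computation is exactly what the learned approaches relax: the networks adjust illumination locally and generalise terrain, which no single closed-form formula does.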
Style Transfer
Wang, C. et al. CartoAgent: a multimodal large language model-powered multi-agent cartographic framework for map style transfer and evaluation. 2025
Wu, S. et al. StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation. 2024
Hong, S. et al. Aesthetic style transferring method based on deep neural network between Chinese landscape painting and classical private garden's virtual scenario. 2023
Alaçam, S. et al. Reciprocal style and information transfer between historical Istanbul Pervititch Maps and satellite views using machine learning. 2022
Wu, A.N. et al. GANmapper: geographical data translation. 2022
Ye, X. et al. MasterplanGAN: Facilitating the smart rendering of urban master plans via generative adversarial networks. 2022
Christophe, S. et al. Neural map style transfer exploration with GANs. 2022
Li, Z. et al. Synthetic Map Generation to Provide Unlimited Training Data for Historical Map Text Detection. 2021
btain%20text%20label%20annotations%20easily.%20However%2C%20the%20cartographic%20styles%20between%20OSM%20map%20tiles%20and%20historical%20maps%20are%20significantly%20different.%20This%20paper%20proposes%20a%20method%20to%20automatically%20generate%20an%20unlimited%20amount%20of%20annotated%20historical%20map%20images%20for%20training%20text%20detection%20models.%20We%20use%20a%20style%20transfer%20model%20to%20convert%20contemporary%20map%20images%20into%20historical%20style%20and%20place%20text%20labels%20upon%20them.%20We%20show%20that%20the%20state-of-the-art%20text%20detection%20models%20%28e.g.%2C%20PSENet%29%20can%20benefit%20from%20the%20synthetic%20historical%20maps%20and%20achieve%20significant%20improvement%20for%20historical%20map%20text%20detection.%22%2C%22date%22%3A%22November%202%2C%202021%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%204th%20ACM%20SIGSPATIAL%20International%20Workshop%20on%20AI%20for%20Geographic%20Knowledge%20Discovery%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3486635.3491070%22%2C%22ISBN%22%3A%22978-1-4503-9120-7%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3486635.3491070%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A07%3A57Z%22%7D%7D%2C%7B%22key%22%3A%2265DFP74U%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhao%20et%20al.%22%2C%22parsedDate%22%3A%222021-07-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhao%2C%20B.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2021.1910075%26%23039%3B%26gt%3BDeep%20fake%20ge
ography%3F%20When%20geospatial%20data%20encounter%20Artificial%20Intelligence%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Deep%20fake%20geography%3F%20When%20geospatial%20data%20encounter%20Artificial%20Intelligence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shaozeng%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chunxue%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifan%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chengbin%22%2C%22lastName%22%3A%22Deng%22%7D%5D%2C%22abstractNote%22%3A%22The%20developing%20convergence%20of%20Artificial%20Intelligence%20and%20GIScience%20has%20raised%20a%20concern%20on%20the%20emergence%20of%20deep%20fake%20geography%20and%20its%20potentials%20in%20transforming%20human%20perception%20of%20the%20geographic%20world.%20Situating%20fake%20geography%20under%20the%20context%20of%20modern%20cartography%20and%20GIScience%2C%20this%20paper%20presents%20an%20empirical%20study%20to%20dissect%20the%20algorithmic%20mechanism%20of%20falsifying%20satellite%20images%20with%20non-existent%20landscape%20features.%20To%20demonstrate%20our%20pioneering%20attempt%20at%20deep%20fake%20detection%2C%20a%20robust%20approach%20is%20then%20proposed%20and%20evaluated.%20Our%20proactive%20study%20warns%20of%20the%20emergence%20and%20proliferation%20of%20deep%20fakes%20in%20geography%20just%20as%20%5Cu201clies%5Cu201d%20in%20maps.%20We%20suggest%20timely%20detections%20of%20deep%20fakes%20in%20geospatial%20data%20and%20proper%20coping%20strategies%20when%20necessary.%20More%20importantly%2C%20it%20is%20encouraged%20to%20cultivate%20a%20critical%20geospatial%2
0data%20literacy%20and%20thus%20to%20understand%20the%20multi-faceted%20impacts%20of%20deep%20fake%20geography%20on%20individuals%20and%20human%20society.%22%2C%22date%22%3A%222021-07-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2021.1910075%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2021.1910075%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A00%3A34Z%22%7D%7D%2C%7B%22key%22%3A%22GBW2ZIMN%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222021-05%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BChen%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9200723%26%23039%3B%26gt%3BSMAPGAN%3A%20Generative%20Adversarial%20Network-Based%20Semisupervised%20Styled%20Map%20Tile%20Generation%20Method%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22SMAPGAN%3A%20Generative%20Adversarial%20Network-Based%20Semisupervised%20Styled%20Map%20Tile%20Generation%20Method%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xu%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Songqiang%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tian%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bangguo%22%2C%22lastName%22%3A%22Yin%22%7D%2C%7B%22creat
orType%22%3A%22author%22%2C%22firstName%22%3A%22Jian%22%2C%22lastName%22%3A%22Peng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaoming%22%2C%22lastName%22%3A%22Mei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haifeng%22%2C%22lastName%22%3A%22Li%22%7D%5D%2C%22abstractNote%22%3A%22Traditional%20online%20map%20tiles%2C%20which%20are%20widely%20used%20on%20the%20Internet%2C%20such%20as%20by%20Google%20Maps%20and%20Baidu%20Maps%2C%20are%20rendered%20from%20vector%20data.%20The%20timely%20updating%20of%20online%20map%20tiles%20from%20vector%20data%2C%20for%20which%20generation%20is%20time-consuming%2C%20is%20a%20difficult%20mission.%20Generating%20map%20tiles%20over%20time%20from%20remote%20sensing%20images%20is%20relatively%20simple%20and%20can%20be%20performed%20quickly%20without%20vector%20data.%20However%2C%20this%20approach%20used%20to%20be%20challenging%20or%20even%20impossible.%20Inspired%20by%20image-to-image%20translation%20%28img2img%29%20techniques%20based%20on%20generative%20adversarial%20networks%20%28GANs%29%2C%20we%20proposed%20a%20semisupervised%20generation%20of%20styled%20map%20tiles%20based%20on%20the%20GANs%20%28SMAPGAN%29%20model%20to%20generate%20styled%20map%20tiles%20directly%20from%20remote%20sensing%20images.%20In%20this%20model%2C%20we%20designed%20a%20semisupervised%20learning%20strategy%20to%20pretrain%20SMAPGAN%20on%20rich%20unpaired%20samples%20and%20fine-tune%20it%20on%20limited%20paired%20samples%20in%20reality.%20We%20also%20designed%20the%20image%20gradient%20L1%20loss%20and%20the%20image%20gradient%20structure%20loss%20to%20generate%20a%20styled%20map%20tile%20with%20global%20topological%20relationships%20and%20detailed%20edge%20curves%20for%20objects%2C%20which%20are%20important%20in%20cartography.%20Moreover%2C%20we%20proposed%20the%20edge%20structural%20similarity%20index%20%28ESSI%29%20as%20a%20metric%20to%20evaluate%20the%20quality%20of%20the%20topological%20consistency%20between%20the%
20generated%20map%20tiles%20and%20ground%20truth.%20The%20experimental%20results%20show%20that%20SMAPGAN%20outperforms%20state-of-the-art%20%28SOTA%29%20works%20according%20to%20the%20mean%20squared%20error%2C%20the%20structural%20similarity%20index%2C%20and%20the%20ESSI.%20Also%2C%20SMAPGAN%20gained%20higher%20approval%20than%20SOTA%20in%20a%20human%20perceptual%20test%20on%20the%20visual%20realism%20of%20cartography.%20Our%20work%20shows%20that%20SMAPGAN%20is%20a%20new%20tool%20with%20excellent%20potential%20for%20producing%20styled%20map%20tiles.%20Our%20implementation%20of%20SMAPGAN%20is%20available%20at%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fimcsq%5C%2FSMAPGAN.%22%2C%22date%22%3A%222021-05%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTGRS.2020.3021819%22%2C%22ISSN%22%3A%221558-0644%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9200723%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A20%3A08Z%22%7D%7D%2C%7B%22key%22%3A%22RNZTJYUW%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%22%2C%22parsedDate%22%3A%222019-11-05%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20Z.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3347146.3363463%26%23039%3B%26gt%3BGenerating%20Historical%20Maps%20from%20Online%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Generating%20Historical%20Maps%20from%20Online%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstNa
me%22%3A%22Zekun%22%2C%22lastName%22%3A%22Li%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20proposes%20an%20automatic%20system%20to%20generate%20a%20large%20amount%20of%20data%20for%20the%20training%20of%20text%20detection%20systems%20for%20historical%20maps.%20The%20system%20takes%20online%20maps%20as%20input%20and%20learns%20a%20conditional%20GAN%20model%2C%20to%20generate%20realistic%20historical%20map%20images%20from%20existing%20geographic%20datasets.%20Then%20the%20system%20uses%20the%20generated%20images%20as%20the%20base%20map%20and%20inserts%20synthetic%20text.%20Since%20the%20system%20has%20the%20control%20of%20text%20content%2C%20font%20style%2C%20and%20location%2C%20the%20system%20can%20obtain%20ground%20truth%20information%20%28minimum%20bounding%20boxes%29%20of%20the%20synthetic%20text.%20To%20overcome%20the%20challenge%20of%20content%20mismatch%2C%20the%20proposed%20system%20uses%20a%20novel%20loss%20function%20to%20encourage%20the%20generation%20of%20historical%20cartographic%20symbols%20in%20the%20foreground%20areas%20and%20discourage%20the%20generation%20in%20the%20background.%20The%20final%20output%20is%20a%20set%20of%20images%20resembling%20historical%20maps%20and%20the%20minimum%20bounding%20boxes%20around%20text%20regions%20on%20the%20images%20as%20annotations.%22%2C%22date%22%3A%22November%205%2C%202019%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%2027th%20ACM%20SIGSPATIAL%20International%20Conference%20on%20Advances%20in%20Geographic%20Information%20Systems%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3347146.3363463%22%2C%22ISBN%22%3A%22978-1-4503-6909-1%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3347146.3363463%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A03%3A54Z%22%7D%7D%2C%7B%22key%22%3A%22XZ5Q378R%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bogucka%20and%20Meng%22%2C%22
parsedDate%22%3A%222019-07-10%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBogucka%2C%20E.P.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F2%5C%2F9%5C%2F2019%5C%2F%26%23039%3B%26gt%3BProjecting%20emotions%20from%20artworks%20to%20maps%20using%20neural%20style%20transfer%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Projecting%20emotions%20from%20artworks%20to%20maps%20using%20neural%20style%20transfer%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Edyta%20P.%22%2C%22lastName%22%3A%22Bogucka%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Liqiu%22%2C%22lastName%22%3A%22Meng%22%7D%5D%2C%22abstractNote%22%3A%22Recent%20advances%20in%20deep%20learning%20have%20facilitated%20the%20exchange%20of%20styles%20and%20textures%20between%20input%20images%20to%20create%20unique%20synthesised%20outputs.%20This%20paper%20assesses%20the%20applicability%20of%20neural%20style%20transfer%20to%20cartography%20and%20evaluates%20to%20what%20degree%20emotions%20attached%20to%20input%20images%20can%20be%20preserved%20in%20maps%20co-created%20by%20human%20and%20algorithm.%20As%20a%20source%20of%20emotions%20we%20utilized%20personal%20paintings%20created%20during%20a%20workshop%20with%20international%20artists%20at%20the%20School%20of%20Machines%2C%20Making%20%26amp%3B%20Make-Believe%20in%20August%202018.%20The%20neural%20style%20transfer%20was%20used%20as%20a%20tool%20to%20transfer%20the%20characteristics%20of%20the%20artworks%20onto%20the%20map.%20Dif
ferences%20in%20emotion%20perception%20between%20human-made%20textures%20and%20generated%20maps%20were%20assessed%20with%20an%20online%20survey%20completed%20by%201187%20users.%20The%20results%20confirmed%20that%20emotional%20descriptions%20remain%20the%20same%20before%20and%20after%20the%20procedure%20of%20neural%20style%20transfer.%20The%20users%20perceived%20artificially%20generated%20maps%20as%20interesting%20and%20visually%20pleasing%20artefacts.%20Artworks%20with%20variety%20of%20line%2C%20point%20and%20surface%20depictions%20were%20the%20most%20suitable%20algorithm%20inputs%20and%20achieved%20better%20visual%20results%20in%20representing%20the%20map%20content.%20After%20analysing%20the%20neural%20style%20transfer%20technique%20and%20identifying%20its%20limitations%20for%20cartographic%20style%20and%20map%20content%2C%20we%20conclude%20with%20plausible%20directions%20for%20future%20research.%22%2C%22date%22%3A%222019%5C%2F07%5C%2F10%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fica-proc-2-9-2019%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.proc-int-cartogr-assoc.net%5C%2F2%5C%2F9%5C%2F2019%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A20%3A30Z%22%7D%7D%2C%7B%22key%22%3A%225DVP367W%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kang%20et%20al.%22%2C%22parsedDate%22%3A%222019-05-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1615729%26%23039%3B%26gt%3BTransferring%20multiscale%20map%20styles%20using%20generative%2
0adversarial%20networks%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Transferring%20multiscale%20map%20styles%20using%20generative%20adversarial%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuhao%22%2C%22lastName%22%3A%22Kang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%20E.%22%2C%22lastName%22%3A%22Roth%22%7D%5D%2C%22abstractNote%22%3A%22The%20advancement%20of%20the%20Artificial%20Intelligence%20%28AI%29%20technologies%20makes%20it%20possible%20to%20learn%20stylistic%20design%20criteria%20from%20existing%20maps%20or%20other%20visual%20art%20and%20transfer%20these%20styles%20to%20make%20new%20digital%20maps.%20In%20this%20paper%2C%20we%20propose%20a%20novel%20framework%20using%20AI%20for%20map%20style%20transfer%20applicable%20across%20multiple%20map%20scales.%20Specifically%2C%20we%20identify%20and%20transfer%20the%20stylistic%20elements%20from%20a%20target%20group%20of%20visual%20examples%2C%20including%20Google%20Maps%2C%20OpenStreetMap%2C%20and%20artistic%20paintings%2C%20to%20unstylized%20GIS%20vector%20data%20through%20two%20generative%20adversarial%20network%20%28GAN%29%20models.%20We%20then%20train%20a%20binary%20classifier%20based%20on%20a%20deep%20convolutional%20neural%20network%20to%20evaluate%20whether%20the%20transfer%20styled%20map%20images%20preserve%20the%20original%20map%20design%20characteristics.%20Our%20experiment%20results%20show%20that%20GANs%20have%20great%20potential%20for%20multiscale%20map%20style%20transferring%2C%20but%20many%20challenges%20remain%20requiring%20future%20research.%22%2C%22date%22%3A%222019-05-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F23729333.2019.1615729%22%2C%22ISSN%22%3A%222372-9333%22%2C%22url%22%3A%22http
s%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F23729333.2019.1615729%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-08-05T22%3A26%3A53Z%22%7D%7D%2C%7B%22key%22%3A%22X5HDH45F%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Isola%20et%20al.%22%2C%22parsedDate%22%3A%222017%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BIsola%2C%20P.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.computer.org%5C%2Fcsdl%5C%2Fproceedings-article%5C%2Fcvpr%5C%2F2017%5C%2F0457f967%5C%2F12OmNx965Bx%26%23039%3B%26gt%3BImage-to-Image%20Translation%20with%20Conditional%20Adversarial%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202017%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Image-to-Image%20Translation%20with%20Conditional%20Adversarial%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Phillip%22%2C%22lastName%22%3A%22Isola%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jun-Yan%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghui%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alexei%20A.%22%2C%22lastName%22%3A%22Efros%22%7D%5D%2C%22abstractNote%22%3A%22We%20investigate%20conditional%20adversarial%20networks%20as%20a%20general-purpose%20solution%20to%20image-to-image%20translation%20problems.%20These%20networks%20not%20only%20learn%20the%20mapping%20from%20input%20image%20to%20output%20image%2C%20but%20also%20learn%20a%20loss%20function%20to%20train%20this%20mappin
g.%20This%20makes%20it%20possible%20to%20apply%20the%20same%20generic%20approach%20to%20problems%20that%20traditionally%20would%20require%20very%20different%20loss%20formulations.%20We%20demonstrate%20that%20this%20approach%20is%20effective%20at%20synthesizing%20photos%20from%20label%20maps%2C%20reconstructing%20objects%20from%20edge%20maps%2C%20and%20colorizing%20images%2C%20among%20other%20tasks.%20Moreover%2C%20since%20the%20release%20of%20the%20pi%5Cu00d72pi%5Cu00d7%20software%20associated%20with%20this%20paper%2C%20hundreds%20of%20twitter%20users%20have%20posted%20their%20own%20artistic%20experiments%20using%20our%20system.%20As%20a%20community%2C%20we%20no%20longer%20hand-engineer%20our%20mapping%20functions%2C%20and%20this%20work%20suggests%20we%20can%20achieve%20reasonable%20results%20without%20handengineering%20our%20loss%20functions%20either.%22%2C%22date%22%3A%222017%22%2C%22proceedingsTitle%22%3A%222017%20IEEE%20Conference%20on%20Computer%20Vision%20and%20Pattern%20Recognition%20%28CVPR%29%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.1109%5C%2FCVPR.2017.632%22%2C%22ISBN%22%3A%22978-1-5386-0457-1%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.computer.org%5C%2Fcsdl%5C%2Fproceedings-article%5C%2Fcvpr%5C%2F2017%5C%2F0457f967%5C%2F12OmNx965Bx%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A01%3A52Z%22%7D%7D%5D%7D
Wu, S. et al. StegoGAN: Leveraging Steganography for Non-Bijective Image-to-Image Translation. 2024
Wu, A.N. et al. GANmapper: geographical data translation. 2022
Christophe, S. et al. Neural map style transfer exploration with GANs. 2022
Zhao, B. et al. Deep fake geography? When geospatial data encounter Artificial Intelligence. 2021
Li, Z. Generating Historical Maps from Online Maps. 2019
Bogucka, E.P. et al. Projecting emotions from artworks to maps using neural style transfer. 2019
Kang, Y. et al. Transferring multiscale map styles using generative adversarial networks. 2019
Isola, P. et al. Image-to-Image Translation with Conditional Adversarial Networks. 2017
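Several entries above (Isola et al. 2017; Kang et al. 2019; Li 2019) build on the pix2pix conditional-GAN objective, in which the generator is trained with an adversarial term plus a weighted L1 reconstruction term (λ = 100 in the original paper). A minimal NumPy sketch of that composite generator loss, purely illustrative of the objective rather than any of the cited implementations:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """Composite pix2pix generator objective (Isola et al. 2017):
    non-saturating adversarial loss plus lam * L1 reconstruction.

    d_fake       : discriminator probabilities for generated tiles, in (0, 1)
    fake, target : generated and ground-truth map images (same shape)
    """
    adversarial = -np.mean(np.log(d_fake + 1e-12))   # push D(fake) toward 1
    reconstruction = np.mean(np.abs(fake - target))  # stay close to the target map
    return adversarial + lam * reconstruction

# Toy example: a perfect reconstruction that fully fools the discriminator
target = np.ones((4, 4))
loss = pix2pix_generator_loss(np.array([1.0]), target, target)  # near zero
```

The L1 term is what keeps generated map tiles spatially aligned with the input; SMAPGAN (Chen et al. 2021) adds gradient-based losses on top of this idea to preserve topology and edge detail.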
Generalization (Symbols / Points)
Xiao, T. et al. Map Generalization Method Supported by Graph Convolutional Networks. 2025
Yan, X. et al. Deep learning in automatic map generalization: achievements and challenges. 2025
Xiao, T. et al. A point selection method in map generalization using graph convolutional network model. 2024
0The%20integration%20of%20cartography%20domain%20knowledge%20and%20deep%20learning%20is%20a%20better%20choice%20to%20settle%20generalization%20decisions.%20This%20study%20uses%20a%20combination%20of%20domain%20knowledge%20and%20a%20data-driven%20approach%20to%20introduce%20graph%20neural%20networks%20into%20point%20cluster%20generalization.%20First%2C%20we%20construct%20a%20virtual%20graph%20structure%20of%20point%20clusters%20using%20Delaunay%20triangulation%2C%20secondly%2C%20we%20extract%20spatial%20features%2C%20contextual%20features%2C%20and%20attributes%20of%20each%20point%20separately%2C%20and%20then%20propose%20a%20generalization%20model%20based%20on%20the%20TAGCN%20network.%20Finally%2C%20this%20model%20is%20trained%20with%20the%20manually%20generalized%20sample%20to%20realize%20the%20automatic%20point%20cluster%20generalization.%20The%20results%20demonstrate%20that%20the%20proposed%20model%20is%20valid%20and%20efficient%20for%20point%20cluster%20generalization%20and%20that%20this%20algorithm%20can%20better%20maintain%20various%20characteristics%20of%20the%20point%20cluster%20in%20both%20the%20local%20area%20and%20the%20overall%20map%20compared%20to%20other%20methods.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2187886%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2187886%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A52%3A05Z%22%7D%7D%2C%7B%22key%22%3A%227RKNWCGZ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xie%20et%20al.%22%2C%22parsedDate%22%3A%222022%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXie%2C%20H.%20et%20al.%20%26lt%3B
a%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FACCESS.2022.3182497%26%23039%3B%26gt%3BA%20Graph%20Neural%20Network-Based%20Map%20Tiles%20Extraction%20Method%20Considering%20POIs%20Priority%20Visualization%20on%20Web%20Map%20Zoom%20Dimension%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Graph%20Neural%20Network-Based%20Map%20Tiles%20Extraction%20Method%20Considering%20POIs%20Priority%20Visualization%20on%20Web%20Map%20Zoom%20Dimension%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Huaze%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Da%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuanyuan%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yukiko%22%2C%22lastName%22%3A%22Kawai%22%7D%5D%2C%22abstractNote%22%3A%22Owing%20to%20the%20tremendous%20popularity%20of%20mobile%20networks%2C%20point-of-interest%20%28POI%29%20data%20of%20location-based%20social%20networks%20%28LBSN%29%20provide%20significant%20geographic%20information%20on%20maps%20and%20can%20be%20utilized%20to%20discuss%20the%20dynamic%20characteristics%20of%20map%20tiles%20as%20segmented%20by%20city%20roads.%20In%20this%20study%2C%20to%20implement%20dynamic%20characteristic%20analysis%20of%20the%20map%20tile%2C%20we%20propose%20a%20spatial-zoom%20graph-attention%20model%20%28SZ-GAT%29%20based%20on%20a%20global-attention%20mechanism%20and%205-category%20POI%20attributes%20for%20each%20map%20tile%20zoom%20dimension.%20Furthermore%2C%20a%20social-media%20dataset%20%28Twitter%20with%20geolocation%29%20is%20utilized%20to%20promote%20POI%20visualization%20at%20different%20zoom%20levels%20and%20improve%20the%20ag
gregation%20efficiency%20of%20geographic%20records%20in%20zoom%20dimensions.%20In%20the%20experiments%2C%20we%20extract%20POI%20geo-features%20from%20Twitter%20and%20display%20the%20user%5Cu2019s%20favorite%20POI%20features%20at%20each%20map%20zooming%20level%20with%205-dimensional%20tweet%20attributes.%20We%20evaluate%20the%20accuracy%20of%20the%20POI%20prediction%20on%20Google%2C%20OpenStreetMap%2C%20Bing%2C%20and%20Yahoo%21%20maps%20by%20comparing%20the%20tweets%5Cu2019%20visit%20history.%20The%20predictive%20performance%20of%20the%20proposed%20method%20is%20more%20than%2056%25%20for%20each%20zoom%20level%20on%2060%20randomly-selected%20map%20tiles%20in%20Kyoto%20City.%22%2C%22date%22%3A%222022%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FACCESS.2022.3182497%22%2C%22ISSN%22%3A%222169-3536%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FACCESS.2022.3182497%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-21T13%3A44%3A16Z%22%7D%7D%5D%7D
Xiao, T. et al. Map Generalization Method Supported by Graph Convolutional Networks. 2025
Yan, X. et al. Deep learning in automatic map generalization: achievements and challenges. 2025
Xiao, T. et al. A point selection method in map generalization using graph convolutional network model. 2024
Generalization (Lines)
nerative%20models%2C%20specifically%20using%20CycleGAN%20for%20line%20feature%20generalization.%20The%20quality%20of%20the%20generated%20results%20was%20assessed%20by%20constructing%20various%20symbolization%20datasets%20%28line%20width%2C%20type%2C%20and%20color%29%20and%20evaluating%20CycleGAN%5Cu2019s%20performance%20using%20metrics%20such%20as%20the%20MSE%2C%20SSIM%2C%20and%20PSNR.%20The%20results%20indicate%20that%20moderate%20line%20widths%20%280.5%5Cu20131%29%20yield%20better%20detail%20preservation%2C%20and%20different%20line%20types%20%28framed%20lines%20and%20dashed%20lines%29%20can%20highlight%20feature%20boundaries%20and%20enhance%20visual%20perception.%20By%20contrast%2C%20high-contrast%20color%20schemes%20enhance%20feature%20differentiation%20but%20increase%20pixel-level%20errors.%20This%20study%20concludes%20that%20generative%20models%20can%20maintain%20the%20geometric%20structure%20and%20spatial%20distribution%20of%20line%20features%2C%20but%20it%20is%20crucial%20to%20choose%20more%20suitable%20line%20features%20for%20different%20scenarios%20to%20meet%20detail%20requirements%2C%20ensuring%20high-quality%20outputs%20under%20diverse%20configurations.%22%2C%22date%22%3A%222024%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi13120418%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F12%5C%2F418%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A07%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22NP58WEWR%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zheng%20et%20al.%22%2C%22parsedDate%22%3A%222024-09%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZheng%2C%20H.%20et%20al.%20%26lt%3Ba%20cl
ass%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F9%5C%2F300%26%23039%3B%26gt%3BRoad%20Network%20Intelligent%20Selection%20Method%20Based%20on%20Heterogeneous%20Graph%20Attention%20Neural%20Network%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Road%20Network%20Intelligent%20Selection%20Method%20Based%20on%20Heterogeneous%20Graph%20Attention%20Neural%20Network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haohua%22%2C%22lastName%22%3A%22Zheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianchen%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Heying%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guangxia%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianzhong%22%2C%22lastName%22%3A%22Guo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiayao%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22Selecting%20road%20networks%20in%20cartographic%20generalization%20has%20consistently%20posed%20formidable%20challenges%2C%20driving%20research%20toward%20the%20application%20of%20intelligent%20models.%20Despite%20previous%20efforts%2C%20the%20accuracy%20and%20connectivity%20preservation%20in%20these%20studies%2C%20particularly%20when%20dealing%20with%20road%20types%20of%20similar%20sample%20sizes%2C%20still%20warrant%20improvement.%20To%20address%20these%20shortcomings%2C%20we%20introduce%20a%20Heterogeneous%20Graph%20Attention%20Network%20%28HAN%29%20for%20road%20selection%2C%20where%20the%20feature%20masking%20method%20is%20initially%20utilized%20to%20assess%20the%20significance%20of%20road%20features.%20Concent
rating%20on%20the%20most%20relevant%20features%2C%20two%20meta-paths%20are%20introduced%20within%20the%20HAN%20framework%3A%20one%20for%20aggregating%20features%20of%20the%20same%20road%20type%20within%20the%20first-order%20neighborhood%2C%20emphasizing%20local%20connectivity%2C%20and%20another%20for%20extending%20this%20aggregation%20to%20the%20second-order%20neighborhood%2C%20capturing%20a%20broader%20spatial%20context.%20For%20a%20comprehensive%20evaluation%2C%20we%20use%20a%20set%20of%20metrics%20considering%20both%20quantitative%20and%20qualitative%20aspects%20of%20the%20road%20network.%20On%20road%20types%20with%20similar%20sample%20sizes%2C%20the%20HAN%20model%20outperforms%20other%20models%20in%20both%20transductive%20and%20inductive%20tasks.%20Its%20accuracy%20%28ACC%29%20is%20higher%20by%201.62%25%20and%200.67%25%2C%20and%20its%20F1-score%20is%20higher%20by%201.43%25%20and%200.81%25%2C%20respectively.%20Additionally%2C%20it%20enhances%20the%20overall%20connectivity%20of%20the%20selected%20network.%20In%20summary%2C%20our%20HAN-based%20method%20provides%20an%20advanced%20solution%20for%20road%20network%20selection%2C%20surpassing%20previous%20approaches%20in%20terms%20of%20accuracy%20and%20connectivity%20preservation.%22%2C%22date%22%3A%222024%5C%2F9%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi13090300%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F9%5C%2F300%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-28T19%3A04%3A25Z%22%7D%7D%2C%7B%22key%22%3A%228H5ADJ79%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yan%20and%20Yang%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl
-entry%26quot%3B%26gt%3BYan%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2218106%26%23039%3B%26gt%3BA%20deep%20learning%20approach%20for%20polyline%20and%20building%20simplification%20based%20on%20graph%20autoencoder%20with%20flexible%20constraints%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20deep%20learning%20approach%20for%20polyline%20and%20building%20simplification%20based%20on%20graph%20autoencoder%20with%20flexible%20constraints%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%5D%2C%22abstractNote%22%3A%22Polyline%20and%20building%20simplification%20remain%20challenging%20in%20cartography.%20Most%20proposed%20algorithms%20are%20geometric-based%20and%20rely%20on%20specific%20rules.%20In%20this%20study%2C%20we%20propose%20a%20deep%20learning%20approach%20to%20simplify%20polylines%20and%20buildings%20based%20on%20a%20graph%20autoencoder%20%28GAE%29.%20The%20model%20receives%20the%20coordinates%20of%20line%20vertices%20as%20inputs%20and%20obtains%20a%20simplified%20representation%20by%20reconstructing%20the%20original%20inputs%20with%20fewer%20vertices%20through%20pooling%2C%20in%20which%20the%20graph%20convolution%20based%20on%20graph%20Fourier%20transform%20is%20used%20for%20the%20layer-by-layer%20feature%20computation.%20By%20adjusting%20the%20loss%20functions%2C%20constraints%20such%20as%20area%20and%20shape%20preservation%20and%20angle-characteristic%20enhancement%20are%20flexibly%20configured%20under%20a%20unified%20learning%20framework.%20Our%20results%20confirmed%20the%20applicability%20of%20the%20GAE%20appr
oach%20to%20the%20multi-scale%20simplification%20of%20land-cover%20boundaries%20and%20contours%20by%20adjusting%20the%20number%20of%20output%20nodes.%20Compared%20with%20existing%20Douglas%5Cu2012Peukcer%2C%20Fourier%20transform%2C%20and%20Delaunay%20triangulation%20approaches%2C%20the%20GAE%20approach%20was%20superior%20in%20achieving%20morphological%20abstraction%20while%20producing%20reasonably%20low%20position%2C%20area%2C%20and%20shape%20changes.%20Furthermore%2C%20we%20applied%20it%20to%20simplify%20buildings%20and%20demonstrated%20the%20potential%20for%20preserving%20the%20diversified%20characteristics%20of%20different%20types%20of%20lines.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2218106%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2218106%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A54%3A51Z%22%7D%7D%2C%7B%22key%22%3A%22ES4Y6VQD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCourtial%2C%20A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2267419%26%23039%3B%26gt%3BDeepMapScaler%3A%20a%20workflow%20of%20deep%20neural%20networks%20for%20the%20generation%20of%20generalised%20maps%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22DeepMapScaler%3A%20a%20workflow%20of%20deep%20neu
ral%20networks%20for%20the%20generation%20of%20generalised%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22The%20automation%20of%20map%20generalization%20has%20been%20an%20important%20research%20subject%20for%20decades%20but%20is%20not%20fully%20solved%20yet.%20Deep%20learning%20techniques%20are%20designed%20for%20various%20image%20generation%20tasks%2C%20so%20one%20may%20think%20that%20it%20would%20be%20possible%20to%20apply%20these%20techniques%20to%20cartography%20and%20train%20a%20holistic%20model%20for%20end-to-end%20map%20generalization.%20On%20the%20contrary%2C%20we%20assume%20that%20map%20generalization%20is%20a%20task%20too%20complex%20to%20be%20learnt%20with%20a%20unique%20model.%20Thus%2C%20in%20this%20article%2C%20we%20propose%20to%20resort%20to%20past%20research%20on%20map%20generalization%20and%20to%20separate%20map%20generalization%20into%20simpler%20sub-tasks%2C%20each%20of%20which%20can%20be%20more%20easily%20resolved%20by%20a%20deep%20neural%20network.%20Our%20main%20contribution%20is%20a%20workflow%20of%20deep%20models%2C%20called%20DeepMapScaler%2C%20which%20achieves%20a%20step-by-step%20topographic%20map%20generalization%20from%20detailed%20topographic%20data.%20First%2C%20we%20implement%20this%20workflow%20to%20generalize%20topographic%20maps%20containing%20roads%2C%20buildings%2C%20and%20rivers%20at%20a%20medium%20scale%20%281%3A50k%29%20from%20a%20detailed%20dataset.%20The%20results%20of%20each%20step%20are%20quantitatively%20and%20visually%20evaluated.%20Then%20the%20generalized%20images%20are%20compared%20with%20the%20generalization%20performed%20using%20a%20holistic%20model%20for%20an%20end-to-end%20map%20generaliz
ation%20and%20a%20traditional%20semi-automatic%20map%20generalization%20process.%20The%20experiment%20shows%20that%20the%20workflow%20approach%20is%20more%20promising%20than%20the%20holistic%20model%2C%20as%20each%20sub-task%20is%20specialized%20and%20fine-tuned%20accordingly.%20However%2C%20the%20results%20still%20do%20not%20reach%20the%20quality%20level%20of%20the%20semi-automatic%20traditional%20map%20generalization%20process%2C%20as%20some%20sub-tasks%20are%20more%20complex%20to%20handle%20with%20neural%20networks.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2267419%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2267419%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A53%3A12Z%22%7D%7D%2C%7B%22key%22%3A%22GHDL43H8%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Karsznia%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKarsznia%2C%20I.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2283075%26%23039%3B%26gt%3BUsing%20machine%20learning%20and%20data%20enrichment%20in%20the%20selection%20of%20roads%20for%20small-scale%20maps%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Using%20machine%20learning%20and%20data%20enrichment%20in%20the%20selection%20of%20roads%20for%20small-scale%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C
%22firstName%22%3A%22Izabela%22%2C%22lastName%22%3A%22Karsznia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Albert%22%2C%22lastName%22%3A%22Adolf%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Weibel%22%7D%5D%2C%22abstractNote%22%3A%22Making%20decisions%20about%20which%20objects%20to%20keep%20or%20omit%20is%20challenging%20in%20map%20design.%20This%20process%2C%20called%20selection%2C%20constitutes%20the%20first%20operation%20in%20cartographic%20generalization.%20In%20this%20research%2C%20a%20method%20of%20automatic%20road%20selection%20for%20creating%20small-scale%20maps%20using%20machine%20learning%20and%20data%20enrichment%20is%20proposed.%20First%2C%20the%20problem%20of%20contextual%20information%20scarcity%20concerning%20roads%20in%20the%20source%20database%20is%20addressed.%20Additional%20information%20concerning%20the%20relations%20between%20roads%20and%20other%20objects%20was%20added%20%28such%20as%20centrality%20and%20proximity%20measures%29.%20Second%2C%20machine%20learning%20is%20used%20to%20design%20automatic%20selection%20models%20based%20on%20enriched%20information.%20Third%2C%20three%20different%20road%20selection%20approaches%20are%20implemented.%20The%20baseline%20approach%20is%20following%20the%20official%20map%20design%20guidelines.%20The%20second%20approach%20is%20based%20on%20machine%20learning%20using%20the%20enriched%20road%20database.%20The%20third%20approach%20is%20based%20on%20an%20existing%20structural%20model.%20The%20results%20of%20all%20approaches%20are%20compared%20to%20existing%20atlas%20maps%20designed%20by%20experienced%20cartographers.%20The%20results%20of%20the%20Machine%20Learning%20Approaches%20were%20most%20similar%20to%20the%20atlas%20maps%20%28between%2081%25%20and%2090%25%20accuracy%29.%20The%20least%20efficient%20approaches%20were%20the%20Structural
%20Approach%20with%2032%25%20and%20the%20Guidelines%20Approach%20with%2044%25%20accuracy.%20We%20conclude%20that%20enriching%20road%20data%20with%20new%20contextual%20information%20concerning%20roads%20and%20using%20machine%20learning%20is%20beneficial%20as%20the%20achieved%20results%20outperform%20both%20Guidelines%20and%20Structural%20Approaches.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2283075%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2283075%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A53%3A59Z%22%7D%7D%2C%7B%22key%22%3A%22849RMY2D%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xiao%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXiao%2C%20T.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2024.2413549%26%23039%3B%26gt%3BA%20road%20generalization%20method%20using%20graph%20convolutional%20network%20based%20on%20mesh-line%20structure%20unit%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20road%20generalization%20method%20using%20graph%20convolutional%20network%20based%20on%20mesh-line%20structure%20unit%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tianyuan%22%2C%22lastName%22%3A%22Xiao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghua%22%2C%22lastName%22%3A%22Ai%22%7D%2C%
7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dirk%22%2C%22lastName%22%3A%22Burghardt%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pengcheng%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Aji%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Kong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Huafei%22%2C%22lastName%22%3A%22Yu%22%7D%5D%2C%22abstractNote%22%3A%22Road%20network%20simplification%20is%20a%20complex%20decision-making%20process.%20Such%20a%20multi-factor%20decision%20and%20scaling%20operation%20traditionally%20applied%20rule-based%20methods.%20The%20establishment%20and%20adjustment%20of%20these%20rules%20involve%20many%20human-set%20parameters%20and%20conditions%2C%20which%20makes%20generalized%20results%20closely%20related%20to%20the%20cartographer%5Cu2019s%20experience%20and%20habits.%20On%20the%20other%20hand%2C%20existing%20methods%20tend%20to%20consider%20individual%20structures%20separately%20in%20different%20algorithms%2C%20such%20as%20strokes%2C%20meshes%20and%20graph%20networks%2C%20lacking%20a%20solution%20that%20brings%20the%20advantages%20of%20these%20methods%20together.%20Aiming%20at%20the%20above%20problems%2C%20this%20study%20designs%20a%20simplification%20method%20using%20the%20Mesh-Line%20Structure%20Unit%20%28MLSU%29%20to%20consider%20polyline%20and%20polygon%20characteristics%20simultaneously%20with%20the%20support%20of%20graph-based%20deep%20learning%20networks.%20In%20order%20to%20make%20generalization%20decisions%2C%20a%20model%20based%20on%20graph%20convolutional%20network%20%28GCN%29%20is%20constructed%20and%20trained%20using%20real%20data%2C%20thus%20realizing%20the%20road%20network%20selective%20omission.%20The%20experimental%20results%20indicate%20that%20t
he%20proposed%20method%20effectively%20achieves%20automatic%20road%20generalization.%20The%20proposed%20method%20uses%20graph%20convolutional%20neural%20network%20techniques%20to%20construct%20a%20road%20generalization%20model%2C%20and%20can%20effectively%20combine%20the%20advantages%20of%20geographic%20domain%20knowledge%20with%20data-driven%20methods.A%20new%20specific%20MLSU%20structure%20is%20proposed%20for%20the%20road%20generalization%20tasks%2C%20which%20combines%20a%20road%20mesh%20with%20the%20road%20itself%2C%20enabling%20it%20to%20capture%20more%20road-related%20features%20and%20substitute%20the%20road%20in%20deep%20learning%20network%20model%20for%20training.The%20road%20generalization%20approach%20proposed%20in%20this%20paper%20comprehensively%20considers%20the%20roads%20themselves%2C%20the%20road%20network%2C%20and%20the%20neighbouring%20mesh%20polygons%2C%20thereby%20combining%20the%20advantages%20of%20traditional%20methods%20based%20on%20graph%20theory%2C%20strokes%20and%20mesh%20merging.%20The%20proposed%20method%20uses%20graph%20convolutional%20neural%20network%20techniques%20to%20construct%20a%20road%20generalization%20model%2C%20and%20can%20effectively%20combine%20the%20advantages%20of%20geographic%20domain%20knowledge%20with%20data-driven%20methods.%20A%20new%20specific%20MLSU%20structure%20is%20proposed%20for%20the%20road%20generalization%20tasks%2C%20which%20combines%20a%20road%20mesh%20with%20the%20road%20itself%2C%20enabling%20it%20to%20capture%20more%20road-related%20features%20and%20substitute%20the%20road%20in%20deep%20learning%20network%20model%20for%20training.%20The%20road%20generalization%20approach%20proposed%20in%20this%20paper%20comprehensively%20considers%20the%20roads%20themselves%2C%20the%20road%20network%2C%20and%20the%20neighbouring%20mesh%20polygons%2C%20thereby%20combining%20the%20advantages%20of%20traditional%20methods%20based%20on%20graph%20theory%2C%20strokes%20and%20mesh%20merging.%22%2C%22date%22%3A%222024-01-01%22%2C%2
2language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F10106049.2024.2413549%22%2C%22ISSN%22%3A%221010-6049%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2024.2413549%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-12T22%3A38%3A11Z%22%7D%7D%2C%7B%22key%22%3A%22XF36NJTM%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Tang%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BTang%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F13658816.2024.2387198%26%23039%3B%26gt%3BAutomatic%20road%20network%20selection%20method%20considering%20functional%20semantic%20features%20of%20roads%20with%20graph%20convolutional%20networks%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20road%20network%20selection%20method%20considering%20functional%20semantic%20features%20of%20roads%20with%20graph%20convolutional%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianbo%22%2C%22lastName%22%3A%22Tang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Deng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ju%22%2C%22lastName%22%3A%22Peng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Huimin%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuexi%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7
B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xueying%22%2C%22lastName%22%3A%22Chen%22%7D%5D%2C%22abstractNote%22%3A%22Road%20network%20selection%20plays%20a%20key%20role%20in%20map%20generalization%20for%20creating%20multi-scale%20road%20network%20maps.%20Existing%20methods%20usually%20determine%20road%20importance%20based%20on%20road%20geometric%20and%20topological%20features%2C%20few%20evaluate%20road%20importance%20from%20the%20perspective%20of%20road%20utilization%20based%20on%20human%20travel%20data%2C%20ignoring%20the%20functional%20values%20of%20roads%2C%20which%20leads%20to%20a%20mismatch%20between%20the%20generated%20results%20and%20people%5Cu2019s%20needs.%20This%20paper%20develops%20two%20functional%20semantic%20features%20%28i.e.%20travel%20path%20selection%20probability%20and%20regional%20attractiveness%29%20to%20measure%20the%20functional%20importance%20of%20roads%20and%20proposes%20an%20automatic%20road%20network%20selection%20method%20based%20on%20graph%20convolutional%20networks%20%28GCN%29%2C%20which%20models%20road%20network%20selection%20as%20a%20binary%20classification.%20Firstly%2C%20we%20create%20a%20dual%20graph%20representing%20the%20source%20road%20network%20and%20extract%20road%20features%20including%20six%20graphical%20and%20two%20functional%20semantic%20features.%20Then%2C%20we%20develop%20an%20extended%20GCN%20model%20with%20connectivity%20loss%20for%20generating%20multi-scale%20road%20networks%20and%20propose%20a%20refinement%20strategy%20based%20on%20the%20road%20continuity%20principle%20to%20ensure%20road%20topology.%20Experiments%20demonstrate%20the%20proposed%20model%20with%20functional%20features%20improves%20the%20quality%20of%20selection%20results%2C%20particularly%20for%20large%20and%20medium%20scale%20maps.%20The%20proposed%20method%20outperforms%20state-of-the-art%20methods%20and%20provides%20a%20meaningful%20attempt%20for%20artificial%20intelligence%20models%20empowering%20cartography.%22%2C%22date%22%3A%2211%5C%
2F2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2024.2387198%22%2C%22ISSN%22%3A%221365-8816%2C%201362-3087%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F13658816.2024.2387198%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A19%3A50Z%22%7D%7D%2C%7B%22key%22%3A%22N5XNL8F3%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wang%20and%20Qian%22%2C%22parsedDate%22%3A%222023-12-31%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWang%2C%20D.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2023.2252762%26%23039%3B%26gt%3BGraph%20neural%20network%20method%20for%20the%20intelligent%20selection%20of%20river%20system%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Graph%20neural%20network%20method%20for%20the%20intelligent%20selection%20of%20river%20system%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Di%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haizhong%22%2C%22lastName%22%3A%22Qian%22%7D%5D%2C%22abstractNote%22%3A%22The%20spatial%20features%20and%20generalisation%20rules%20for%20river%20network%20generalisation%20are%20difficult%20to%20directly%20quantify%20using%20indicators.%20To%20consider%20dimensional%20information%20hidden%20in%20river%20networks%20and%20improve%20river%20network%20selection%20accuracy%2C%20this%20study%20introduces%20a%20graph%20convolutional%
[River network selection with a graph convolutional neural network; tested on the Min River system, Yangtze River Basin]. 2023. https://doi.org/10.1080/10106049.2023.2252762

Guo, X. et al. A Method for Intelligent Road Network Selection Based on Graph Neural Network. 2023. https://doi.org/10.3390/ijgi12080336

Courtial, A. et al. Deriving map images of generalised mountain roads with generative adversarial networks. 2023. https://doi.org/10.1080/13658816.2022.2123488

Yu, H. et al. Integrating domain knowledge and graph convolutional neural networks to support river network selection. 2023. https://doi.org/10.1111/tgis.13104

Du, J. et al. Segmentation and sampling method for complex polyline generalization based on a generative adversarial network. 2022. https://doi.org/10.1080/10106049.2021.1878288

Du, J. et al. Polyline simplification based on the artificial neural network with constraints of generalization knowledge. 2022. https://doi.org/10.1080/15230406.2021.2013944

Courtial, A. et al. Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation. 2022. https://doi.org/10.5194/agile-giss-3-32-2022

Du, J. and Wu, F. An ensemble learning simplification approach based on multiple machine-learning algorithms with the fusion using of raster and vector data and a use case of coastline simplification. 2022. https://doi.org/10.11947/j.AGCS.2022.20210135

Yu, W. and Chen, Y. Data-driven polyline simplification using a stacked autoencoder-based deep neural network. 2022. https://doi.org/10.1111/tgis.12965

Zheng, J. et al. Deep Graph Convolutional Networks for Accurate Automatic Road Network Selection. 2021. https://doi.org/10.3390/ijgi10110768

Courtial, A. et al. Generative adversarial networks to generalise urban areas in topographic maps. 2021. https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B4-2021/15/2021/
.%20This%20experiment%20succeeded%20in%20producing%20image%20tiles%20that%20look%20like%20legible%20maps.%20It%20also%20highlights%20the%20impact%20of%20data%20and%20representation%20choices%20on%20the%20quality%20of%20predicted%20images%2C%20and%20the%20challenge%20of%20learning%20geographic%20relationships.%22%2C%22date%22%3A%222021%5C%2F06%5C%2F30%22%2C%22proceedingsTitle%22%3A%22The%20International%20Archives%20of%20the%20Photogrammetry%2C%20Remote%20Sensing%20and%20Spatial%20Information%20Sciences%22%2C%22conferenceName%22%3A%22XXIV%20ISPRS%20Congress%20%3Cq%3EImaging%20today%2C%20foreseeing%20tomorrow%3C%5C%2Fq%3E%2C%20Commission%20IV%20-%202021%20edition%2C%205%26ndash%3B9%20July%202021%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2021-15-2021%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.int-arch-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2021%5C%2F15%5C%2F2021%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A22%3A23Z%22%7D%7D%2C%7B%22key%22%3A%22CFM27Y4P%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222020-05%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCourtial%2C%20A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F9%5C%2F5%5C%2F338%26%23039%3B%26gt%3BExploring%20the%20Potential%20of%20Deep%20Learning%20Segmentation%20for%20Mountain%20Roads%20Generalisation%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle
%22%2C%22title%22%3A%22Exploring%20the%20Potential%20of%20Deep%20Learning%20Segmentation%20for%20Mountain%20Roads%20Generalisation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Achraf%22%2C%22lastName%22%3A%22El%20Ayedi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22Among%20cartographic%20generalisation%20problems%2C%20the%20generalisation%20of%20sinuous%20bends%20in%20mountain%20roads%20has%20always%20been%20a%20popular%20one%20due%20to%20its%20difficulty.%20Recent%20research%20showed%20the%20potential%20of%20deep%20learning%20techniques%20to%20overcome%20some%20remaining%20research%20problems%20regarding%20the%20automation%20of%20cartographic%20generalisation.%20This%20paper%20explores%20this%20potential%20on%20the%20popular%20mountain%20road%20generalisation%20problem%2C%20which%20requires%20smoothing%20the%20road%2C%20enlarging%20the%20bend%20summits%2C%20and%20schematising%20the%20bend%20series%20by%20removing%20some%20of%20the%20bends.%20We%20modelled%20the%20mountain%20road%20generalisation%20as%20a%20deep%20learning%20problem%20by%20generating%20an%20image%20from%20input%20vector%20road%20data%2C%20and%20tried%20to%20generate%20it%20as%20an%20output%20of%20the%20model%20a%20new%20image%20of%20the%20generalised%20roads.%20Similarly%20to%20previous%20studies%20on%20building%20generalisation%2C%20we%20used%20a%20U-Net%20architecture%20to%20generate%20the%20generalised%20image%20from%20the%20ungeneralised%20image.%20The%20deep%20learning%20model%20was%20trained%20and%20evaluated%20on%20a%20dataset%20composed%20of%20roads%20in%20the%20Alps%20extracted%20from%20IGN%20%28the%20French%20national%20mapping%20agency%29%20maps%20
at%201%3A250%2C000%20%28output%29%20and%201%3A25%2C000%20%28input%29%20scale.%20The%20results%20are%20encouraging%20as%20the%20output%20image%20looks%20like%20a%20generalised%20version%20of%20the%20roads%20and%20the%20accuracy%20of%20pixel%20segmentation%20is%20around%2065%25.%20The%20model%20learns%20how%20to%20smooth%20the%20output%20roads%2C%20and%20that%20it%20needs%20to%20displace%20and%20enlarge%20symbols%20but%20does%20not%20always%20correctly%20achieve%20these%20operations.%20This%20article%20shows%20the%20ability%20of%20deep%20learning%20to%20understand%20and%20manage%20the%20geographic%20information%20for%20generalisation%2C%20but%20also%20highlights%20challenges%20to%20come.%22%2C%22date%22%3A%222020%5C%2F5%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi9050338%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F9%5C%2F5%5C%2F338%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A02%3A26Z%22%7D%7D%5D%7D
Zheng, H. et al. Quantum neural network-based approach for optimizing road network selection. 2025
Xiao, T. et al. Map Generalization Method Supported by Graph Convolutional Networks. 2025
Shuaidong, J. et al. Cooperative generalization of isobaths and coastlines based on a DQN-driven simplification scale. 2025
Yan, X. et al. Deep learning in automatic map generalization: achievements and challenges. 2025
Courtial, A. et al. DeepMapScaler: a workflow of deep neural networks for the generation of generalised maps. 2024
Karsznia, I. et al. Using machine learning and data enrichment in the selection of roads for small-scale maps. 2024
Wang, D. et al. Graph neural network method for the intelligent selection of river system. 2023
Guo, X. et al. A Method for Intelligent Road Network Selection Based on Graph Neural Network. 2023
Courtial, A. et al. Deriving map images of generalised mountain roads with generative adversarial networks. 2023
Courtial, A. et al. Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation. 2022
Zheng, J. et al. Deep Graph Convolutional Networks for Accurate Automatic Road Network Selection. 2021
Courtial, A. et al. Generative adversarial networks to generalise urban areas in topographic maps. 2021
Courtial, A. et al. Exploring the Potential of Deep Learning Segmentation for Mountain Roads Generalisation. 2020
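Several entries above (Zheng et al. 2021; Guo et al. 2023; Wang et al. 2023) frame road-network selection as node classification on a graph with a GCN. A toy sketch of that formulation follows; the graph, features, and (untrained, random) weights are all hypothetical, and this is one Kipf-Welling-style propagation step, not any paper's actual model.

```python
import numpy as np

# Toy road graph: 4 road segments (nodes); an edge means a shared junction.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Hypothetical per-segment features: [length_km, junction_degree]
X = np.array([[2.0, 2], [0.5, 2], [1.2, 3], [3.1, 1]])

def gcn_layer(A, X, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)         # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

rng = np.random.default_rng(0)
H = gcn_layer(A, X, rng.normal(size=(2, 8)))   # hidden node embeddings
logits = H @ rng.normal(size=(8, 2))           # 2 classes: keep / drop
keep = logits.argmax(axis=1)                   # per-segment selection decision
print(H.shape, keep.shape)
```

In the cited papers the classifier is trained against expert generalization decisions at the target scale; here the output is meaningless beyond illustrating the node-classification framing.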
Generalization (Polygons / Areas)
Chen, G. et al. Building clustering method that integrates graph attention networks and spectral clustering. 2025
Wamhoff, M. et al. CNN-Based Geometric Feature Embedding Using Coordinates for Cartographic Generalization Tasks on Building Footprints. 2025
Ding, C. et al. Geographical Scene: The Natural Unit for Geographical Analysis and Its Recognition Based on Data with Spatial and Semantic Features. 2025
Xiao, T. et al. Map Generalization Method Supported by Graph Convolutional Networks. 2025
Yan, X. et al. Deep learning in automatic map generalization: achievements and challenges. 2025
Zhou, Z. et al. SpaGAN: A spatially-aware generative adversarial network for building generalization in image maps. 2024
Fu, C. et al. Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI. 2024
%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKnura%2C%20M.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2273397%26%23039%3B%26gt%3BLearning%20from%20vector%20data%3A%20enhancing%20vector-based%20shape%20encoding%20and%20shape%20classification%20for%20map%20generalization%20purposes%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Learning%20from%20vector%20data%3A%20enhancing%20vector-based%20shape%20encoding%20and%20shape%20classification%20for%20map%20generalization%20purposes%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Knura%22%7D%5D%2C%22abstractNote%22%3A%22Map%20generalization%20is%20a%20complex%20task%20that%20requires%20a%20high%20level%20of%20spatial%20cognition%2C%20and%20deep%20learning%20techniques%20have%20shown%20in%20numerous%20research%20fields%20that%20they%20could%20match%20or%20even%20outplay%20human%20cognition%20when%20knowledge%20is%20implicitly%20in%20the%20data.%20First%20experiments%20that%20apply%20deep%20learning%20techniques%20to%20map%20generalization%20tasks%20thereby%20adapt%20models%20from%20image%20processing%2C%20creating%20input%20data%20by%20rasterizing%20spatial%20vector%20data.%20Because%20image-based%20learning%20has%20major%20shortcomings%20for%20map%20generalization%2C%20this%20article%20investigates%20possibilities%20to%20learn%20directly%20from%20vector%20data%2C%20utilizing%20vector-based%20encoding%20schemes.%20First%2C%20we%20enhance%20preprocessing%20methods%20to%20match%20essential%20properties%20of%20deep%20learning%20models%20%5Cu2013%20namely%20regularity%20and%20featu
re%20description%20%5Cu2013%20and%20evaluate%20the%20performance%20of%20Convolutional%20Neural%20Networks%20%28CNN%29%2C%20Recurrent%20Neural%20Networks%20%28RNN%29%2C%20and%20Graph%20Convolutional%20Neural%20Networks%20%28GCNN%29%20in%20combination%20with%20a%20feature-based%20encoding%20scheme.%20The%20results%20show%20that%20feature%20descriptors%20improve%20the%20accuracy%20of%20all%20three%20neural%20networks%2C%20and%20that%20the%20overall%20performances%20of%20the%20models%20are%20quite%20similar%20for%20both%20polygon%20and%20polyline%20shape%20classification%20tasks.%20In%20a%20second%20step%2C%20we%20implement%20an%20exemplary%20building%20generalization%20workflow%20based%20on%20shape%20classification%20and%20template%20matching%2C%20and%20discuss%20the%20generalization%20results%20based%20on%20a%20case%20study.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2273397%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2273397%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T17%3A37%3A51Z%22%7D%7D%2C%7B%22key%22%3A%225Q3C888Q%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Fu%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BFu%2C%20C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2264757%26%23039%3B%26gt%3BKeeping%20walls%20straight%3A%20data%20model%20and%20training%20set%20size%20matter%20for%20deep%20learning%20in%20building%20generalization%26lt%3B%5C%2Fa%26gt
%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Keeping%20walls%20straight%3A%20data%20model%20and%20training%20set%20size%20matter%20for%20deep%20learning%20in%20building%20generalization%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cheng%22%2C%22lastName%22%3A%22Fu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiyong%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yu%22%2C%22lastName%22%3A%22Feng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Weibel%22%7D%5D%2C%22abstractNote%22%3A%22Deep%20learning-backed%20models%20have%20shown%20their%20potential%20of%20conducting%20map%20generalization%20tasks.%20However%2C%20pioneering%20studies%20for%20raster-based%20building%20generalization%20encountered%20a%20common%20%5Cu201cwabbly-wall%20effect%5Cu201d%20that%20makes%20the%20predicted%20building%20shapes%20unrealistic.%20This%20effect%20was%20identified%20as%20a%20critical%20challenge%20in%20the%20existing%20studies.%20This%20work%20proposes%20a%20layered%20data%20representation%20model%20that%20separately%20stores%20a%20building%20for%20generalization%20and%20its%20context%20buildings%20in%20different%20channels.%20Incorporating%20adjustments%20to%20training%20sample%20generation%20and%20prediction%20tasks%2C%20we%20show%20how%20even%20without%20using%20more%20complex%20deep%20learning%20architectures%2C%20the%20widely%20used%20Residual%20U-Net%20can%20already%20produce%20straight%20walls%20for%20the%20generalized%20buildings%20and%20maintains%20rectangularity%20and%20parallelism%20of%20the%20buildings%20very%20well%20for%20building%20simplification%20and%20aggregation%20in%20the%20scale%20transition%20from%201%3A5%2C000%20to%201%3A10%2C000%20and%201%3A5%2C000%20to%201%3A15%2C000%2C%20respectively.%20Experiments%20w
ith%20visual%20evaluation%20and%20quantitative%20indicators%20such%20as%20Intersection%20over%20Union%20%28IoU%29%2C%20fractality%2C%20and%20roughness%20index%20show%20that%20using%20a%20larger%20input%20tensor%20size%20is%20an%20easy%20but%20effective%20solution%20to%20improve%20prediction.%20Balancing%20samples%20with%20data%20augmentation%20and%20introducing%20an%20attention%20mechanism%20to%20increase%20network%20learning%20capacity%20can%20help%20in%20certain%20experiment%20settings%20but%20have%20obvious%20tradeoffs.%20In%20addition%2C%20we%20find%20that%20the%20defects%20observed%20in%20previous%20studies%20may%20be%20due%20to%20a%20lack%20of%20enough%20training%20samples.%20We%20thus%20conclude%20that%20the%20wabbly-wall%20challenge%20can%20be%20solved%2C%20paving%20the%20way%20for%20further%20studies%20of%20applying%20raster-based%20deep%20learning%20models%20on%20map%20generalization.%20Demonstrates%20the%20effectiveness%20of%20the%20proposed%20data%20structure%20with%20multiple%20evaluation%20indicatorsIdentifies%20a%20%5Cu201cwabbly-wall%20effect%5Cu201d%20a%20challenge%20in%20deep-learning%20backed%20image%20based%20map%20generalizationProposes%20a%20layered%20data%20structure%20that%20separates%20a%20target%20building%20and%20its%20surrounding%20buildings%20to%20ease%20the%20learning%20task%20in%20training%20deep%20learning%20models%20for%20raster-based%20map%20generalization.%20Demonstrates%20the%20effectiveness%20of%20the%20proposed%20data%20structure%20with%20multiple%20evaluation%20indicators%20Identifies%20a%20%5Cu201cwabbly-wall%20effect%5Cu201d%20a%20challenge%20in%20deep-learning%20backed%20image%20based%20map%20generalization%20Proposes%20a%20layered%20data%20structure%20that%20separates%20a%20target%20building%20and%20its%20surrounding%20buildings%20to%20ease%20the%20learning%20task%20in%20training%20deep%20learning%20models%20for%20raster-based%20map%20generalization.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3
A%2210.1080%5C%2F15230406.2023.2264757%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2264757%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T17%3A37%3A45Z%22%7D%7D%2C%7B%22key%22%3A%228H5ADJ79%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yan%20and%20Yang%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYan%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2218106%26%23039%3B%26gt%3BA%20deep%20learning%20approach%20for%20polyline%20and%20building%20simplification%20based%20on%20graph%20autoencoder%20with%20flexible%20constraints%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20deep%20learning%20approach%20for%20polyline%20and%20building%20simplification%20based%20on%20graph%20autoencoder%20with%20flexible%20constraints%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%5D%2C%22abstractNote%22%3A%22Polyline%20and%20building%20simplification%20remain%20challenging%20in%20cartography.%20Most%20proposed%20algorithms%20are%20geometric-based%20and%20rely%20on%20specific%20rules.%20In%20this%20study%2C%20we%20propose%20a%20deep%20learning%20approach%20to%20simplify%20polylines%20and%20buildings%20based%20on%20a%20graph%20autoencoder%20%28GAE%2
9.%20The%20model%20receives%20the%20coordinates%20of%20line%20vertices%20as%20inputs%20and%20obtains%20a%20simplified%20representation%20by%20reconstructing%20the%20original%20inputs%20with%20fewer%20vertices%20through%20pooling%2C%20in%20which%20the%20graph%20convolution%20based%20on%20graph%20Fourier%20transform%20is%20used%20for%20the%20layer-by-layer%20feature%20computation.%20By%20adjusting%20the%20loss%20functions%2C%20constraints%20such%20as%20area%20and%20shape%20preservation%20and%20angle-characteristic%20enhancement%20are%20flexibly%20configured%20under%20a%20unified%20learning%20framework.%20Our%20results%20confirmed%20the%20applicability%20of%20the%20GAE%20approach%20to%20the%20multi-scale%20simplification%20of%20land-cover%20boundaries%20and%20contours%20by%20adjusting%20the%20number%20of%20output%20nodes.%20Compared%20with%20existing%20Douglas%5Cu2012Peukcer%2C%20Fourier%20transform%2C%20and%20Delaunay%20triangulation%20approaches%2C%20the%20GAE%20approach%20was%20superior%20in%20achieving%20morphological%20abstraction%20while%20producing%20reasonably%20low%20position%2C%20area%2C%20and%20shape%20changes.%20Furthermore%2C%20we%20applied%20it%20to%20simplify%20buildings%20and%20demonstrated%20the%20potential%20for%20preserving%20the%20diversified%20characteristics%20of%20different%20types%20of%20lines.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2218106%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2218106%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A54%3A51Z%22%7D%7D%2C%7B%22key%22%3A%22ES4Y6VQD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%
3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCourtial%2C%20A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2267419%26%23039%3B%26gt%3BDeepMapScaler%3A%20a%20workflow%20of%20deep%20neural%20networks%20for%20the%20generation%20of%20generalised%20maps%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22DeepMapScaler%3A%20a%20workflow%20of%20deep%20neural%20networks%20for%20the%20generation%20of%20generalised%20maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22The%20automation%20of%20map%20generalization%20has%20been%20an%20important%20research%20subject%20for%20decades%20but%20is%20not%20fully%20solved%20yet.%20Deep%20learning%20techniques%20are%20designed%20for%20various%20image%20generation%20tasks%2C%20so%20one%20may%20think%20that%20it%20would%20be%20possible%20to%20apply%20these%20techniques%20to%20cartography%20and%20train%20a%20holistic%20model%20for%20end-to-end%20map%20generalization.%20On%20the%20contrary%2C%20we%20assume%20that%20map%20generalization%20is%20a%20task%20too%20complex%20to%20be%20learnt%20with%20a%20unique%20model.%20Thus%2C%20in%20this%20article%2C%20we%20propose%20to%20resort%20to%20past%20research%20on%20map%20generalization%20and%20to%20separate%20map%20generalization%20into%20simpler%20sub-tasks%2C%20each%20of%20which%20can%20be%20more%20easily%20resolved%20by%20a%20deep%20neural%20network.%20O
ur%20main%20contribution%20is%20a%20workflow%20of%20deep%20models%2C%20called%20DeepMapScaler%2C%20which%20achieves%20a%20step-by-step%20topographic%20map%20generalization%20from%20detailed%20topographic%20data.%20First%2C%20we%20implement%20this%20workflow%20to%20generalize%20topographic%20maps%20containing%20roads%2C%20buildings%2C%20and%20rivers%20at%20a%20medium%20scale%20%281%3A50k%29%20from%20a%20detailed%20dataset.%20The%20results%20of%20each%20step%20are%20quantitatively%20and%20visually%20evaluated.%20Then%20the%20generalized%20images%20are%20compared%20with%20the%20generalization%20performed%20using%20a%20holistic%20model%20for%20an%20end-to-end%20map%20generalization%20and%20a%20traditional%20semi-automatic%20map%20generalization%20process.%20The%20experiment%20shows%20that%20the%20workflow%20approach%20is%20more%20promising%20than%20the%20holistic%20model%2C%20as%20each%20sub-task%20is%20specialized%20and%20fine-tuned%20accordingly.%20However%2C%20the%20results%20still%20do%20not%20reach%20the%20quality%20level%20of%20the%20semi-automatic%20traditional%20map%20generalization%20process%2C%20as%20some%20sub-tasks%20are%20more%20complex%20to%20handle%20with%20neural%20networks.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2267419%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2267419%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A53%3A12Z%22%7D%7D%2C%7B%22key%22%3A%22U7DDTVCL%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Niu%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BNiu%2C%20X.%20et%
20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2024.2306265%26%23039%3B%26gt%3BDetermining%20the%20optimal%20generalization%20operators%20for%20building%20footprints%20using%20an%20improved%20graph%20neural%20network%20model%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Determining%20the%20optimal%20generalization%20operators%20for%20building%20footprints%20using%20an%20improved%20graph%20neural%20network%20model%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xinyu%22%2C%22lastName%22%3A%22Niu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haizhong%22%2C%22lastName%22%3A%22Qian%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiao%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Limin%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Longfei%22%2C%22lastName%22%3A%22Cui%22%7D%5D%2C%22abstractNote%22%3A%22Determining%20the%20optimal%20generalization%20operators%20of%20city%20buildings%20is%20a%20crucial%20step%20during%20the%20building%20generalization%20process%20and%20an%20important%20aspect%20of%20realizing%20cross-scale%20updating%20of%20map%20data.%20It%20is%20a%20decision-making%20behavior%20of%20the%20cartographer%20that%20can%20be%20learned%20and%20simulated%20using%20artificial%20intelligence%20algorithms.%20Multi-scale%20data%20can%20provide%20rich%20generalization%20samples%20to%20train%20the%20determination%20process.%20However%2C%20previous%20studies%20have%20focused%20primarily%20on%20the%20intelligent%20use%20of%20each%20generalization%20operator%20separately%2C%20neglecting%20the%20intelligent%20scheduling%20issue%20between%20multiple%20op
erators%20when%20they%20are%20used%20simultaneously.%20Herein%2C%20we%20propose%20an%20improved%20graph%20neural%20network%20%28GNN%29%20called%20self-neighborhood%20merged%20GNN%20%28SNGNN%29%20that%20selects%20the%20optimal%20generalization%20operators%20for%20different%20buildings.%20In%20SNGNN%2C%20node%20and%20edge%20information%20are%20passed%20with%20different%20weights%20through%20two%20modules%20to%20simulate%20the%20effects%20of%20a%20building%20on%20itself%20and%20the%20neighborhood%20on%20either%20side.%20SNGNN%20has%20been%20experimentally%20validated%20using%20sample%20datasets%20for%20Ningbo%2C%20China%2C%20at%201%3A10%2C000%20and%201%3A25%2C000.%20The%20F1-score%20of%20the%20testing%20dataset%20was%2094.19%25%2C%20and%20the%20classification%20precision%20of%20each%20operator%20was%20%5Cu226587%25.%20Compared%20with%20other%20popular%20intelligent%20algorithms%2C%20the%20experimental%20results%20for%20SNGNN%20revealed%20better%20performance%20in%20determining%20the%20optimal%20generalization%20operators.%22%2C%22date%22%3A%222024-01-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F10106049.2024.2306265%22%2C%22ISSN%22%3A%221010-6049%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2024.2306265%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-05T11%3A29%3A08Z%22%7D%7D%2C%7B%22key%22%3A%22PIM5X79V%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222023-08-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhou%2C%20Z.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5
C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623001697%26%23039%3B%26gt%3BMove%20and%20remove%3A%20Multi-task%20learning%20for%20building%20simplification%20in%20vector%20maps%20with%20a%20graph%20convolutional%20neural%20network%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Move%20and%20remove%3A%20Multi-task%20learning%20for%20building%20simplification%20in%20vector%20maps%20with%20a%20graph%20convolutional%20neural%20network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiyong%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cheng%22%2C%22lastName%22%3A%22Fu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%22%2C%22lastName%22%3A%22Weibel%22%7D%5D%2C%22abstractNote%22%3A%22Simplification%20of%20building%20footprints%20is%20an%20essential%20task%20in%20topographic%20map%20generalization%20from%20large%20to%20medium%20scales.%20The%20traditional%20rule-%20or%20constraint-based%20algorithms%20commonly%20require%20cartographers%20to%20enumerate%20and%20formalize%20as%20many%20scenarios%20as%20possible.%20Recently%2C%20some%20studies%20have%20introduced%20deep%20learning%20to%20image%20map%20generalization%2C%20whose%20outputs%2C%20however%2C%20may%20exhibit%20deformed%20boundaries%20due%20to%20pure%20image%20input.%20Vector%20maps%20are%20thus%20a%20reasonable%20solution%20to%20avoid%20such%20issues%20because%20of%20their%20accurate%2C%20object-based%20geometric%20representation.%20However%2C%20few%20existing%20studies%20have%20aimed%20to%20simplify%20buildings%20in%20vector%20maps%20with%20the%20help%20of%20neural%20networks.%20Building%20simplification%20in%20vector%20maps%20can%20be%20regarded%20as%20the%20joint%20contribution%20from%20two%20elementary%20operations%20on%20vertices%20of%20building%20polygons%3A%20remove%20redundant%20verti
ces%20and%20move%20kept%20vertices.%20This%20research%20proposes%20a%20multi-task%20learning%20method%20with%20graph%20convolutional%20neural%20networks.%20The%20proposed%20method%20formulates%20the%20building%20simplification%20problem%20as%20a%20joint%20task%20of%20node%20removal%20classification%20and%20node%20movement%20regression.%20A%20multi-task%20graph%20convolutional%20neural%20network%20model%20%28MT_GCNN%29%20is%20developed%20to%20learn%20node%20removal%20and%20movement%20simultaneously.%20The%20model%20was%20evaluated%20with%20a%20map%20from%20Stuttgart%2C%20Germany%20that%20contains%208494%20buildings%20generalized%20from%20the%20source%20scale%20of%201%3A5%2C000%20to%20the%20target%20scale%20of%201%3A10%2C000.%20The%20experimental%20results%20show%20that%20the%20proposed%20method%20can%20generate%2080%25%20of%20the%20buildings%20with%20positional%20errors%20of%20less%20than%200.2%20m%2C%2095%25%20with%20a%20shape%20difference%20under%200.5%2C%20and%20around%2098%25%20with%20an%20area%20difference%20under%200.1%20of%20IoU%2C%20compared%20to%20the%20ground%20truth%20target%20buildings%2C%20thus%20demonstrating%20the%20feasibility%20of%20the%20proposed%20method.%20The%20code%20is%20available%20at%3A%20https%3A%5C%2F%5C%2Fgithub.com%5C%2Fchouisgiser%5C%2FMapGeneralizer.%22%2C%22date%22%3A%222023-08-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.isprsjprs.2023.06.004%22%2C%22ISSN%22%3A%220924-2716%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623001697%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-09-07T20%3A44%3A36Z%22%7D%7D%2C%7B%22key%22%3A%22BAWBLWXJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Courtial%20et%20al.%22%2C%22parsedDate%22%3A%222022-06-10%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A
%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BCourtial%2C%20A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F3%5C%2F32%5C%2F2022%5C%2F%26%23039%3B%26gt%3BRepresenting%20Vector%20Geographic%20Information%20As%20a%20Tensor%20for%20Deep%20Learning%20Based%20Map%20Generalisation%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Representing%20Vector%20Geographic%20Information%20As%20a%20Tensor%20for%20Deep%20Learning%20Based%20Map%20Generalisation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Azelle%22%2C%22lastName%22%3A%22Courtial%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiang%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22Recently%2C%20many%20researchers%20tried%20to%20generate%20%28generalised%29%20maps%20using%20deep%20learning%2C%20and%20most%20of%20the%20proposed%20methods%20deal%20with%20deep%20neural%20network%20architecture%20choices.%20Deep%20learning%20learns%20to%20reproduce%20examples%2C%20so%20we%20think%20that%20improving%20the%20training%20examples%2C%20and%20especially%20the%20representation%20of%20the%20initial%20geographic%20information%2C%20is%20the%20key%20issue%20for%20this%20problem.%20Our%20article%20extracts%20some%20representation%20issues%20from%20a%20literature%20review%20and%20proposes%20different%20ways%20to%20represent%20vector%20geographic%20information%20as%20a%20tensor.We%20propose%20two%20kinds%20of%20contributions%3A%201%29%20the%20representation%20of%20information%20by%20layers%3B%202%29%20the%20representation%20of%20
additional%20information.%20Then%2C%20we%20demonstrate%20the%20interest%20of%20some%20of%20our%20propositions%20with%20experiments%20that%20show%20a%20visual%20improvement%20for%20the%20generation%20of%20generalised%20topographic%20maps%20in%20urban%20areas.%22%2C%22date%22%3A%222022%5C%2F06%5C%2F10%22%2C%22language%22%3A%22English%22%2C%22DOI%22%3A%2210.5194%5C%2Fagile-giss-3-32-2022%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fagile-giss.copernicus.org%5C%2Farticles%5C%2F3%5C%2F32%5C%2F2022%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A48%3A06Z%22%7D%7D%2C%7B%22key%22%3A%22X7VCA2LP%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yan%20et%20al.%22%2C%22parsedDate%22%3A%222022-02-20%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYan%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttp%3A%5C%2F%5C%2Fxb.chinasmp.com%5C%2Farticle%5C%2F2022%5C%2F1001-1595%5C%2F2022-2-269.htm%26%23039%3B%26gt%3BAn%20adaptive%20building%20simplification%20approach%20based%20on%20shape%20analysis%20and%20representation%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22An%20adaptive%20building%20simplification%20approach%20based%20on%20shape%20analysis%20and%20representation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tuo%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C
Xiao, T. et al. Map Generalization Method Supported by Graph Convolutional Networks. 2025
Yan, X. et al. Deep learning in automatic map generalization: achievements and challenges. 2025
Courtial, A. et al. DeepMapScaler: a workflow of deep neural networks for the generation of generalised maps. 2024
Courtial, A. et al. Representing Vector Geographic Information As a Tensor for Deep Learning Based Map Generalisation. 2022
Yang, M. et al. A hybrid approach to building simplification with an evaluator from a backpropagation neural network. 2022
Courtial, A. et al. Generative adversarial networks to generalise urban areas in topographic maps. 2021
Wu, Y. et al. Application of Deep Learning for 3D building generalization. 2019
Feng, Y. et al. Learning Cartographic Building Generalization with Deep Convolutional Neural Networks. 2019
Touya, G. et al. Is deep learning the new agent for map generalization? 2019
Sester, M. et al. Building Generalization Using Deep Learning. 2018
Abstraction
Drews, J. et al. A New AI Tool for the Design of Cartographic Pictograms (PictoAI) and Its Potentials for Increasing Their Meaningfulness. 2025
Karamatsu, T. et al. Iconify: Converting Photographs into Icons. 2020
Displacement (Labels)
7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222025-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F14%5C%2F2%5C%2F88%26%23039%3B%26gt%3BAutomatic%20Annotation%20of%20Map%20Point%20Features%20Based%20on%20Deep%20Learning%20ResNet%20Models%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20Annotation%20of%20Map%20Point%20Features%20Based%20on%20Deep%20Learning%20ResNet%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yaolin%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiwen%22%2C%22lastName%22%3A%22Qin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jingsong%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qian%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaolong%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22Point%20feature%20cartographic%20label%20placement%20is%20a%20key%20problem%20in%20the%20automatic%20configuration%20of%20map%20labeling.%20Prior%20research%20on%20it%20only%20addresses%20label%20conflict%20or%20overlap%20issues%3B%20it%20does%20not%20fully%20take%20into%20account%20and%20resolve%20both%20types%20of%20issues.%20In%20this%20study%2C%20we%20attempt%20to%20apply%20machine%20learning%20techniques%20to%20the%
20automatic%20placement%20of%20point%20feature%20labels%20since%20label%20placement%20is%20a%20task%20that%20heavily%20relies%20on%20expert%20expertise%2C%20which%20is%20very%20congruent%20with%20neural%20networks%5Cu2019%20ability%20to%20mimic%20the%20human%20brain%5Cu2019s%20thought%20process.%20We%20trained%20ResNet%20using%20large%20amounts%20of%20well-labeled%20picture%20data.%20The%20label%5Cu2019s%20proper%20location%20for%20a%20given%20unlabeled%20point%20feature%20was%20then%20predicted%20by%20the%20trained%20model.%20We%20assessed%20the%20outcomes%20both%20quantitatively%20and%20qualitatively%2C%20contrasting%20the%20ResNet%20model%5Cu2019s%20output%20with%20that%20of%20the%20expert%20manual%20placement%20approach%20and%20the%20conventional%20Maplex%20automatic%20placement%20method.%20According%20to%20the%20evaluation%2C%20the%20ResNet%20model%5Cu2019s%20test%20set%20accuracy%20was%2097.08%25%2C%20demonstrating%20its%20ability%20to%20locate%20the%20point%20feature%20label%20in%20the%20right%20place.%20This%20study%20offers%20a%20workable%20solution%20to%20the%20label%20overlap%20and%20conflict%20issue.%20Simultaneously%2C%20it%20has%20significantly%20enhanced%20the%20map%5Cu2019s%20esthetic%20appeal%20and%20the%20information%5Cu2019s%20clarity.%22%2C%22date%22%3A%222025%5C%2F2%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi14020088%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F14%5C%2F2%5C%2F88%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A04%3A54Z%22%7D%7D%2C%7B%22key%22%3A%22IW5QDTPF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yu%20et%20al.%22%2C%22parsedDate%22%3A%222025%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20cla
ss%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYu%2C%20H.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1111%5C%2Ftgis.70080%26%23039%3B%26gt%3BA%20SegNet-Based%20Approach%20for%20Road%20Label%20Placement%20Integrating%20Geometric%20and%20Textual%20Information%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20SegNet-Based%20Approach%20for%20Road%20Label%20Placement%20Integrating%20Geometric%20and%20Textual%20Information%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Huafei%22%2C%22lastName%22%3A%22Yu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghua%22%2C%22lastName%22%3A%22Ai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rachid%22%2C%22lastName%22%3A%22Oucheikh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Kong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hao%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhenyu%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lars%22%2C%22lastName%22%3A%22Harrie%22%7D%5D%2C%22abstractNote%22%3A%22As%20a%20crucial%20tool%20for%20enhancing%20the%20readability%20and%20comprehensibility%20of%20geoinformation%2C%20automated%20label%20placement%20in%20mapping%20applications%20remains%20a%20significant%20challenge%2C%20particularly%20when%20generalizing%20the%20label%20placement%20process%20for%20high-density%20maps%2C%20despite%20the%20availability%20of%20tools%20such%20as%20QGIS-PAL%20and%20ArcGIS-Maplex%20label%20engine.%20This
%20study%20focuses%20on%20utilizing%20deep%20learning%20%28DL%29%20for%20road%20labeling%20tasks%20and%20addresses%20two%20key%20questions%3A%20Can%20DL%20models%20predict%20the%20quantity%20and%20shape%20of%20road%20labels%3F%20Can%20they%20determine%20the%20label%20positions%3F%20Our%20proposed%20SegNet-based%20model%20employed%20%5Cu201cwhere%5Cu201d%20and%20%5Cu201cwhat%5Cu201d%20modules%2C%20integrating%20geometric%20contextual%20information%20with%20textual%20data%20as%20inputs.%20We%20validated%20the%20model%20using%20London%2C%20UK%20wayfinding%20map%20data%2C%20demonstrating%20improved%20readability%20and%20achieving%20comparable%20machine%20learning%20evaluation%20metrics%20to%20mainstream%20labeling%20tools.%20Notably%2C%20our%20method%20accurately%20predicted%20label%20placements%20for%20roads%20and%20ensured%20consistent%20label%20sizes.%20This%20study%20provides%20valuable%20insights%20and%20recommendations%20for%20leveraging%20DL%20techniques%20to%20alleviate%20labor-intensive%20challenges%20of%20map%20labeling.%22%2C%22date%22%3A%222025%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1111%5C%2Ftgis.70080%22%2C%22ISSN%22%3A%221467-9671%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2Fabs%5C%2F10.1111%5C%2Ftgis.70080%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A05%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22G6FLZTU4%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222024-08%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWu%2C%20Z.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5
C%2F2220-9964%5C%2F13%5C%2F8%5C%2F294%26%23039%3B%26gt%3BAn%20Improved%20ANN-Based%20Label%20Placement%20Method%20Considering%20Surrounding%20Features%20for%20Schematic%20Metro%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22An%20Improved%20ANN-Based%20Label%20Placement%20Method%20Considering%20Surrounding%20Features%20for%20Schematic%20Metro%20Maps%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiwei%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tian%22%2C%22lastName%22%3A%22Lan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chenzhen%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Donglin%22%2C%22lastName%22%3A%22Cheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xing%22%2C%22lastName%22%3A%22Shi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Meisheng%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guangjun%22%2C%22lastName%22%3A%22Zeng%22%7D%5D%2C%22abstractNote%22%3A%22On%20schematic%20metro%20maps%2C%20high-quality%20label%20placement%20is%20helpful%20to%20passengers%20performing%20route%20planning%20and%20orientation%20tasks.%20It%20has%20been%20reported%20that%20the%20artificial%20neural%20network%20%28ANN%29%20has%20the%20potential%20to%20place%20labels%20with%20learned%20labeling%20knowledge.%20However%2C%20the%20previous%20ANN-based%20method%20only%20considered%20the%20effects%20of%20station%20points%20and%20their%20connected%20edges.%20Indeed%2C%20unconnected%20but%20surrounding%20features%20%28points%2C%20edges%2C%20and%20labels%29%20also%20significantly%20affect%20the%20quality%20of%20label%20placement.%20To%20address%20this%2C%20we%20have%20proposed%20an%20improved%20method.%20The%20relations
%20between%20label%20positions%20and%20both%20connected%20and%20surrounding%20features%20are%20first%20modeled%20based%20on%20labeling%20natural%20intelligence%20%28i.e.%2C%20the%20experience%2C%20knowledge%2C%20and%20rules%20of%20labeling%20established%20by%20cartographers%29.%20Then%2C%20ANN%20is%20employed%20to%20learn%20such%20relations.%20Quantitative%20evaluations%20show%20that%20our%20method%20reaches%20lower%20percentages%20of%20label%5Cu2013point%20overlap%20%280.00%25%29%2C%20label%5Cu2013edge%20overlap%20%284.12%25%29%2C%20and%20label%5Cu2013label%20overlap%20%2820.58%25%29%20compared%20to%20the%20benchmark%20%284.17%25%2C%2014.29%25%2C%20and%2035.11%25%2C%20respectively%29.%20On%20the%20other%20hand%2C%20our%20method%20effectively%20avoids%20ambiguous%20labels%20and%20ensures%20labels%20from%20the%20same%20line%20are%20placed%20on%20the%20same%20side.%20Qualitative%20evaluations%20show%20that%20approximately%2075%25%20of%20users%20prefer%20our%20results.%20This%20novel%20method%20has%20the%20potential%20to%20advance%20the%20automated%20generation%20of%20schematic%20metro%20maps.%22%2C%22date%22%3A%222024%5C%2F8%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi13080294%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F13%5C%2F8%5C%2F294%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-12T22%3A42%3A47Z%22%7D%7D%2C%7B%22key%22%3A%22S89JBAHI%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Oucheikh%20and%20Harrie%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BOucheikh%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B
_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2291051%26%23039%3B%26gt%3BA%20feasibility%20study%20of%20applying%20generative%20deep%20learning%20models%20for%20map%20labeling%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20feasibility%20study%20of%20applying%20generative%20deep%20learning%20models%20for%20map%20labeling%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rachid%22%2C%22lastName%22%3A%22Oucheikh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lars%22%2C%22lastName%22%3A%22Harrie%22%7D%5D%2C%22abstractNote%22%3A%22The%20automation%20of%20map%20labeling%20is%20an%20ongoing%20research%20challenge.%20Currently%2C%20the%20map%20labeling%20algorithms%20are%20based%20on%20rules%20defined%20by%20experts%20for%20optimizing%20the%20placement%20of%20the%20text%20labels%20on%20maps.%20In%20this%20paper%2C%20we%20investigate%20the%20feasibility%20of%20using%20well-labeled%20map%20samples%20as%20a%20source%20of%20knowledge%20for%20automating%20the%20labeling%20process.%20The%20basic%20idea%20is%20to%20train%20deep%20learning%20models%2C%20specifically%20the%20generative%20models%20CycleGAN%20and%20Pix2Pix%2C%20on%20a%20large%20number%20of%20map%20examples.%20Then%2C%20the%20trained%20models%20are%20used%20to%20predict%20good%20locations%20of%20the%20labels%20given%20unlabeled%20raster%20maps.%20We%20compare%20the%20results%20obtained%20by%20the%20deep%20learning%20models%20to%20manual%20map%20labeling%20and%20a%20state-of-the-art%20optimization-based%20labeling%20method.%20A%20quantitative%20evaluation%20is%20performed%20in%20terms%20of%20legibility%2C%20association%20and%20map%20readability%20as%20well%20as%20a%20visual%20evaluation%20performed%20by%20three%20professional%20cartographers.%20The%20evaluation%20indicates%20that%20the%20deep%20l
earning%20models%20are%20capable%20of%20finding%20appropriate%20positions%20for%20the%20labels%2C%20but%20that%20they%2C%20in%20this%20implementation%2C%20are%20not%20well%20suited%20for%20selecting%20the%20labels%20to%20show%20and%20to%20determine%20the%20size%20of%20the%20labels.%20The%20result%20provides%20valuable%20insights%20into%20the%20current%20capabilities%20of%20generative%20models%20for%20such%20task%2C%20while%20also%20identifying%20the%20key%20challenges%20that%20will%20shape%20future%20research%20directions.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2291051%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2291051%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A59%3A28Z%22%7D%7D%2C%7B%22key%22%3A%22YVB4IF8V%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Harrie%20et%20al.%22%2C%22parsedDate%22%3A%222022-05-25%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BHarrie%2C%20L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-022-00115-z%26%23039%3B%26gt%3BLabel%20Placement%20Challenges%20in%20City%20Wayfinding%20Map%20Production%5Cu2014Identification%20and%20Possible%20Solutions%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Label%20Placement%20Challenges%20in%20City%20Wayfinding%20Map%20Production%5Cu2014Identification%20and%20Possible%20Solutions%22%2C%22creators%22%3A%5B%7B%22creatorType%22%
3A%22author%22%2C%22firstName%22%3A%22Lars%22%2C%22lastName%22%3A%22Harrie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rachid%22%2C%22lastName%22%3A%22Oucheikh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22%5Cu00c5sa%22%2C%22lastName%22%3A%22Nilsson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Andreas%22%2C%22lastName%22%3A%22Oxenstierna%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pontus%22%2C%22lastName%22%3A%22Cederholm%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lai%22%2C%22lastName%22%3A%22Wei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kai-Florian%22%2C%22lastName%22%3A%22Richter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Perola%22%2C%22lastName%22%3A%22Olsson%22%7D%5D%2C%22abstractNote%22%3A%22Map%20label%20placement%20is%20an%20important%20task%20in%20map%20production%2C%20which%20needs%20to%20be%20automated%20since%20it%20is%20tedious%20and%20requires%20a%20significant%20amount%20of%20manual%20work.%20In%20this%20paper%2C%20we%20identify%20five%20cartographic%20labeling%20situations%20that%20present%20challenges%20by%20causing%20intensive%20manual%20work%20in%20map%20production%20of%20city%20wayfinding%20maps%2C%20e.g.%2C%20label%20placement%20in%20high%20density%20areas%2C%20utilizing%20true%20label%20geometries%20in%20automated%20methods%2C%20and%20creating%20a%20good%20relationship%20between%20text%20labels%20and%20icons.%20We%20evaluate%20these%20challenges%20in%20an%20open%20source%20map%20labeling%20tool%20%28QGIS%29%2C%20provide%20results%20from%20a%20preliminary%20study%2C%20and%20discuss%20if%20there%20are%20other%20techniques%20that%20could%20be%20applicable%20to%20solving%20these%20challenges.%20These%20techniques%20are%20based%20on%20quantified%20cartographic%20rules%20or%20on%20machine%20learning.%20We%20focus%20on%20deep%20learning%20for%20which%20we%20provide%20several%20examples%20of%20
techniques%20from%20other%20application%20domains%20that%20might%20have%20a%20potential%20in%20map%20label%20placement.%20The%20aim%20of%20the%20paper%20is%20to%20explore%20those%20techniques%20and%20to%20recommend%20future%20practical%20studies%20for%20each%20of%20the%20identified%20five%20challenges%20in%20map%20production.%20We%20believe%20that%20targeting%20the%20revealed%20challenges%20using%20the%20proposed%20solutions%20will%20significantly%20raise%20the%20automation%20level%20for%20producing%20city%20wayfinding%20maps%2C%20thus%2C%20having%20a%20real%2C%20measurable%20impact%20on%20production%20time%20and%20costs.%22%2C%22date%22%3A%222022-05-25%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs41651-022-00115-z%22%2C%22ISSN%22%3A%222509-8829%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-022-00115-z%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A58%3A33Z%22%7D%7D%2C%7B%22key%22%3A%223F65FJSQ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lan%20et%20al.%22%2C%22parsedDate%22%3A%222022-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLan%2C%20T.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F1%5C%2F36%26%23039%3B%26gt%3BAn%20ANNs-Based%20Method%20for%20Automated%20Labelling%20of%20Schematic%20Metro%20Maps%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22An%20ANNs-Based%20Method%20for%20Automated%20Labelling%20of%20Schematic%20Metro%20Maps%22%2C%22creators%22%3A%5B%7B%2
2creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tian%22%2C%22lastName%22%3A%22Lan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhilin%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jicheng%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chengyin%22%2C%22lastName%22%3A%22Gong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Ti%22%7D%5D%2C%22abstractNote%22%3A%22Schematic%20maps%20are%20popular%20for%20representing%20transport%20networks.%20In%20the%20last%20two%20decades%2C%20some%20researchers%20have%20been%20working%20toward%20automated%20generation%20of%20network%20layouts%20%28i.e.%2C%20the%20network%20geometry%20of%20schematic%20maps%29%2C%20while%20automated%20labelling%20of%20schematic%20maps%20is%20not%20well%20considered.%20The%20descriptive-statistics-based%20labelling%20method%2C%20which%20models%20the%20labelling%20space%20by%20defining%20various%20station-based%20line%20relations%20in%20advance%2C%20has%20been%20specially%20developed%20for%20schematic%20maps.%20However%2C%20if%20a%20certain%20station-based%20line%20relation%20is%20not%20predefined%20in%20the%20database%2C%20this%20method%20may%20not%20be%20able%20to%20infer%20suitable%20labelling%20positions%20under%20this%20relation.%20It%20is%20noted%20that%20artificial%20neural%20networks%20%28ANNs%29%20have%20the%20ability%20to%20infer%20unseen%20relations.%20In%20this%20study%2C%20we%20aim%20to%20develop%20an%20ANNs-based%20method%20for%20the%20labelling%20of%20schematic%20metro%20maps.%20Samples%20are%20first%20extracted%20from%20representative%20schematic%20metro%20maps%2C%20and%20then%20they%20are%20employed%20to%20train%20and%20test%20ANNs%20models.%20Five%20types%20of%20attributes%20%28e.g.%2C%20station-based%20line%20relations%29%20are%20used%20as%20inputs%2C%20and%20two%20types%20of%20attributes%20%28i.e.%2C%20directions%2
0and%20positions%20of%20labels%29%20are%20used%20as%20outputs.%20Experiments%20show%20that%20this%20ANNs-based%20method%20can%20generate%20effective%20and%20satisfactory%20labelling%20results%20in%20the%20testing%20cases.%20Such%20a%20method%20has%20potential%20to%20be%20extended%20for%20the%20labelling%20of%20other%20transport%20networks.%22%2C%22date%22%3A%222022%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11010036%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F1%5C%2F36%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A04%3A56Z%22%7D%7D%2C%7B%22key%22%3A%225GY6DG23%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222020-08-24%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.int-arch-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2020%5C%2F117%5C%2F2020%5C%2F%26%23039%3B%26gt%3BAutomatic%20label%20placement%20of%20area-features%20using%20deep%20learning%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20label%20placement%20of%20area-features%20using%20deep%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Y.%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22M.%22%2C%22lastName%22%3A%22Sakamoto%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22T
.%22%2C%22lastName%22%3A%22Shinohara%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22T.%22%2C%22lastName%22%3A%22Satoh%22%7D%5D%2C%22abstractNote%22%3A%22Label%20placement%20is%20one%20of%20the%20most%20essential%20tasks%20in%20the%20fields%20of%20cartography%20and%20geographic%20information%20systems.%20Numerous%20studies%20have%20been%20conducted%20on%20the%20automatic%20label%20placement%20for%20the%20past%20few%20decades.%20In%20this%20study%2C%20we%20focus%20on%20automatic%20label%20placement%20of%20area-feature%2C%20which%20has%20been%20relatively%20less%20studied%20than%20that%20of%20point-feature%20and%20line-feature.%20Most%20of%20the%20existing%20approaches%20have%20adopted%20a%20rule-based%20algorithm%2C%20and%20there%20are%20limitations%20in%20expressing%20the%20characteristics%20of%20label%20placement%20for%20area-features%20of%20various%20shapes%20utilizing%20handcrafted%20rules%2C%20criteria%2C%20objective%20functions%2C%20etc.%20Hence%2C%20we%20propose%20a%20novel%20approach%20for%20automatic%20label%20placement%20of%20area-feature%20based%20on%20deep%20learning.%20The%20aim%20of%20the%20proposed%20approach%20is%20to%20obtain%20the%20complex%20and%20implicit%20characteristics%20of%20area-feature%20label%20placement%20by%20manual%20operation%20directly%20and%20automatically%20from%20training%20data.%20First%2C%20the%20area-features%20with%20vector%20format%20are%20converted%20into%20a%20binary%20image.%20Then%20a%20key-point%20detection%20model%2C%20which%20simultaneously%20detect%20and%20localize%20specific%20key-points%20from%20an%20image%2C%20is%20applied%20to%20the%20binary%20image%20to%20estimate%20the%20candidate%20positions%20of%20labels.%20Finally%2C%20the%20final%20label%20placement%20positions%20for%20each%20area-feature%20are%20determined%20via%20simple%20post-process.%20To%20evaluate%20the%20proposed%20approach%2C%20the%20experiments%20with%20cadastral%20data%20were%20conducted.%20The%20experimental%20results%20show%20th
at%20the%20ratios%20of%20the%20estimation%20errors%20within%201.2%20m%20%28corresponding%20to%20one%20pixel%20of%20the%20input%20image%29%20were%2092.6%25%20and%2094.5%25%20in%20the%20center%20and%20upper-left%20placement%20style%2C%20respectively.%20It%20implies%20that%20the%20proposed%20approach%20could%20place%20the%20labels%20for%20area-features%20automatically%20and%20accurately.%22%2C%22date%22%3A%222020-08-24%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.5194%5C%2Fisprs-archives-XLIII-B4-2020-117-2020%22%2C%22ISSN%22%3A%222194-9034%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.int-arch-photogramm-remote-sens-spatial-inf-sci.net%5C%2FXLIII-B4-2020%5C%2F117%5C%2F2020%5C%2F%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T21%3A05%3A53Z%22%7D%7D%5D%7D
Zhang, Y. et al. Automatic Annotation of Map Point Features Based on Deep Learning ResNet Models. 2025
Oucheikh, R. et al. A feasibility study of applying generative deep learning models for map labeling. 2024
Lan, T. et al. An ANNs-Based Method for Automated Labelling of Schematic Metro Maps. 2022
Li, Y. et al. Automatic label placement of area-features using deep learning. 2020
Georeferencing and Map Registration
Cui, L. et al. A Transformer-Based Approach for Efficient Geometric Feature Extraction from Vector Shape Data. 2025
Qin, Z. et al. A Registration Method for Historical Maps Based on Self-Supervised Feature Matching. 2025
challenges%2C%20including%20insufficient%20training%20data%2C%20variations%20in%20image%20sizes%2C%20and%20unavailable%20texture%20features.%20To%20address%20these%20challenges%2C%20we%20constructed%20a%20dedicated%20dataset%20of%20over%20100%20scanned%20historical%20maps%2C%20including%20both%20raw%20and%20preprocessed%20segmented%20images.%20We%20then%20developed%20an%20enhanced%20SuperGlue-based%20registration%20framework%2C%20optimized%20for%20the%20specific%20obstacles%20posed%20by%20historical%20maps%2C%20such%20as%20low%20texture%20and%20large%20image%20size.%20Additionally%2C%20we%20proposed%20a%20self-supervised%20fine-tuning%20feature%20extraction%20algorithm%20and%20a%20Transformer-based%20architecture%20utilizing%20graph%20attention%20mechanisms%20to%20refine%20feature%20descriptors%20and%20enhance%20feature%20matching%20performance.%20Experimental%20results%20indicate%20that%20our%20solution%20achieves%20superior%20performance%20compared%20to%20existing%20models%2C%20with%20RMSE%20reduced%20by%20up%20to%2020%25%2C%20ROCC%20improved%20by%20up%20to%2010%25%2C%20and%20processing%20time%20shortened%20by%20at%20least%2015%25.%22%2C%22date%22%3A%222025%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fapp15031472%22%2C%22ISSN%22%3A%222076-3417%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2076-3417%5C%2F15%5C%2F3%5C%2F1472%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A42%3A13Z%22%7D%7D%2C%7B%22key%22%3A%22527SHCE4%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wu%20et%20al.%22%2C%22parsedDate%22%3A%222022-11-14%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWu%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%2303
9%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3557918.3565871%26%23039%3B%26gt%3BUnsupervised%20historical%20map%20registration%20by%20a%20deformation%20neural%20network%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Unsupervised%20historical%20map%20registration%20by%20a%20deformation%20neural%20network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sidi%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Raimund%22%2C%22lastName%22%3A%22Schn%5Cu00fcrer%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Magnus%22%2C%22lastName%22%3A%22Heitzler%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lorenz%22%2C%22lastName%22%3A%22Hurni%22%7D%5D%2C%22abstractNote%22%3A%22Image%20registration%20that%20aligns%20multi-temporal%20or%20multi-source%20images%20is%20vital%20for%20tasks%20like%20change%20detection%20and%20image%20fusion.%20Thanks%20to%20the%20advance%20and%20large-scale%20practice%20of%20modern%20surveying%20methods%2C%20multi-temporal%20historical%20maps%20can%20be%20unlocked%20and%20combined%20to%20trace%20object%20changes%20in%20the%20past%2C%20potentially%20supporting%20research%20in%20environmental%20science%2C%20ecology%20and%20urban%20planning%2C%20etc.%20Even%20when%20maps%20are%20geo-referenced%2C%20the%20contained%20geographical%20features%20can%20be%20misaligned%20due%20to%20surveying%2C%20painting%2C%20map%20generalization%2C%20and%20production%20bias.%20In%20our%20work%2C%20we%20adapt%20an%20end-to-end%20unsupervised%20deformation%20network%20that%20couples%20rigid%20and%20non-rigid%20transformations%20to%20align%20scanned%20historical%20map%20sheets%20at%20different%20time%20stamps.%20To%20the%20best%20of%20our%20knowledge%2C%20we%20are%20the%20
first%20to%20use%20unsupervised%20deep%20learning%20to%20register%20map%20images.%20We%20address%20the%20sparsity%20of%20map%20features%20by%20introducing%20a%20loss%20based%20on%20distance%20fields.%20When%20aligning%20the%20displaced%20landmark%20locations%20by%20our%20proposed%20method%2C%20the%20results%20are%20promising%20both%20quantitatively%20and%20qualitatively.%20The%20generated%20smooth%20deformation%20grid%20can%20be%20applied%20to%20vector%20features%20directly%20to%20align%20them%20from%20the%20source%20map%20sheet%20to%20the%20target%20map%20sheet.%22%2C%22date%22%3A%22November%2014%2C%202022%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%205th%20ACM%20SIGSPATIAL%20International%20Workshop%20on%20AI%20for%20Geographic%20Knowledge%20Discovery%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3557918.3565871%22%2C%22ISBN%22%3A%22978-1-4503-9532-8%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3557918.3565871%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A31%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22JPMHK8QY%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Feng%20et%20al.%22%2C%22parsedDate%22%3A%222022-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BFeng%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9288879%26%23039%3B%26gt%3BDeepMM%3A%20Deep%20Learning%20Based%20Map%20Matching%20With%20Data%20Augmentation%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22jou
rnalArticle%22%2C%22title%22%3A%22DeepMM%3A%20Deep%20Learning%20Based%20Map%20Matching%20With%20Data%20Augmentation%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jie%22%2C%22lastName%22%3A%22Feng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yong%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kai%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhao%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tong%22%2C%22lastName%22%3A%22Xia%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jinglin%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Depeng%22%2C%22lastName%22%3A%22Jin%22%7D%5D%2C%22abstractNote%22%3A%22As%20a%20fundamental%20component%20in%20map%20service%2C%20map%20matching%20is%20of%20great%20importance%20for%20many%20trajectory-based%20applications%2C%20e.g.%2C%20route%20optimization%2C%20traffic%20scheduling%2C%20and%20fleet%20management.%20In%20practice%2C%20Hidden%20Markov%20Model%20and%20its%20variants%20are%20widely%20used%20to%20provide%20accurate%20and%20efficient%20map%20matching%20service.%20However%2C%20HMM-based%20methods%20fail%20to%20utilize%20the%20knowledge%20%28e.g.%2C%20the%20mobility%20pattern%29%20of%20enormous%20trajectory%20big%20data%2C%20which%20are%20useful%20for%20intelligent%20map%20matching.%20Furthermore%2C%20with%20many%20following-up%20works%2C%20they%20are%20still%20easily%20influenced%20by%20the%20common%20noisy%20and%20sparse%20records%20in%20the%20reality.%20In%20this%20paper%2C%20we%20revisit%20the%20map%20matching%20task%20from%20the%20data%20perspective%20and%20propose%20to%20utilize%20the%20great%20power%20of%20massive%20data%20and%20deep%20learning%20to%20solve%20these%20problems.%20Based%20on%20the%20seq2seq%20learning%20framework%2C%20we%20build%20a%20trajec
tory2road%20model%20with%20attention%20mechanism%20to%20map%20the%20sparse%20and%20noisy%20trajectory%20into%20the%20accurate%20road%20network.%20Different%20from%20previous%20algorithms%2C%20our%20deep%20learning%20based%20model%20complete%20the%20map%20matching%20in%20the%20latent%20space%2C%20which%20provides%20the%20high%20tolerance%20to%20the%20noisy%20trajectory%20and%20also%20enhances%20the%20matching%20with%20the%20knowledge%20of%20mobility%20pattern.%20Extensive%20experiments%20demonstrate%20that%20the%20proposed%20model%20outperforms%20the%20widely%20used%20HMM-based%20methods%20by%20more%20than%2010%20percent%20%28absolute%20accuracy%29%20in%20various%20situations%20especially%20the%20noisy%20and%20sparse%20settings.%22%2C%22date%22%3A%222022-07%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FTMC.2020.3043500%22%2C%22ISSN%22%3A%221558-0660%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9288879%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A20%3A35Z%22%7D%7D%2C%7B%22key%22%3A%22WV6P884E%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Duan%20et%20al.%22%2C%22parsedDate%22%3A%222021-12%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDuan%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9671657%26%23039%3B%26gt%3BA%20Label%20Correction%20Algorithm%20Using%20Prior%20Information%20for%20Automatic%20and%20Accurate%20Geospatial%20Object%20Recognition%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conf
erencePaper%22%2C%22title%22%3A%22A%20Label%20Correction%20Algorithm%20Using%20Prior%20Information%20for%20Automatic%20and%20Accurate%20Geospatial%20Object%20Recognition%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johannes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22Thousands%20of%20scanned%20historical%20topographic%20maps%20contain%20valuable%20information%20covering%20long%20periods%20of%20time%2C%20such%20as%20how%20the%20hydrography%20of%20a%20region%20has%20changed%20over%20time.%20Efficiently%20unlocking%20the%20information%20in%20these%20maps%20requires%20training%20a%20geospatial%20objects%20recognition%20system%2C%20which%20needs%20a%20large%20amount%20of%20annotated%20data.%20Overlapping%20geo-referenced%20external%20vector%20data%20with%20topographic%20maps%20according%20to%20their%20coordinates%20can%20annotate%20the%20desired%20objects%5Cu2019%20locations%20in%20the%20maps%20automatically.%20However%2C%20directly%20overlapping%20the%20two%20datasets%20causes%20misaligned%20and%20false%20annotations%20because%20the%20publication%20years%20and%20coordinate%20projection%20systems%20of%20topographic%20maps%20are%20different%20from%20the%20external%20vector%20data.%20We%20propose%20a%20label%20correction%20algorithm%2C%20which%20leverages%20the%20color%20information%20of%20maps%20and%20the%20prior%20shape%20information%20of%20the%20external%20vector%20data%20to%20reduce%20misaligned%20and%20false%20annotations.%20The%20experiments%20show%20that%20the%20precision%20of%20annota
tions%20from%20the%20proposed%20algorithm%20is%2010%25%20higher%20than%20the%20annotations%20from%20a%20state-of-the-art%20algorithm.%20Consequently%2C%20recognition%20results%20using%20the%20proposed%20algorithm%5Cu2019s%20annotations%20achieve%209%25%20higher%20correctness%20than%20using%20the%20annotations%20from%20the%20state-of-the-art%20algorithm.%22%2C%22date%22%3A%222021-12%22%2C%22proceedingsTitle%22%3A%222021%20IEEE%20International%20Conference%20on%20Big%20Data%22%2C%22conferenceName%22%3A%222021%20IEEE%20International%20Conference%20on%20Big%20Data%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FBigData52589.2021.9671657%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9671657%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A21%3A01Z%22%7D%7D%2C%7B%22key%22%3A%22GXTJ5CJD%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sun%20et%20al.%22%2C%22parsedDate%22%3A%222021-10-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSun%2C%20K.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2020.1845702%26%23039%3B%26gt%3BAligning%20geographic%20entities%20from%20historical%20maps%20for%20building%20knowledge%20graphs%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Aligning%20geographic%20entities%20from%20historical%20maps%20for%20building%20knowledge%20graphs%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kai%22%2C%22lastName%
22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yingjie%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jia%22%2C%22lastName%22%3A%22Song%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yunqiang%22%2C%22lastName%22%3A%22Zhu%22%7D%5D%2C%22abstractNote%22%3A%22Historical%20maps%20contain%20rich%20geographic%20information%20about%20the%20past%20of%20a%20region.%20They%20are%20sometimes%20the%20only%20source%20of%20information%20before%20the%20availability%20of%20digital%20maps.%20Despite%20their%20valuable%20content%2C%20it%20is%20often%20challenging%20to%20access%20and%20use%20the%20information%20in%20historical%20maps%2C%20due%20to%20their%20forms%20of%20paper-based%20maps%20or%20scanned%20images.%20It%20is%20even%20more%20time-consuming%20and%20labor-intensive%20to%20conduct%20an%20analysis%20that%20requires%20a%20synthesis%20of%20the%20information%20from%20multiple%20historical%20maps.%20To%20facilitate%20the%20use%20of%20the%20geographic%20information%20contained%20in%20historical%20maps%2C%20one%20way%20is%20to%20build%20a%20geographic%20knowledge%20graph%20%28GKG%29%20from%20them.%20This%20paper%20proposes%20a%20general%20workflow%20for%20completing%20one%20important%20step%20of%20building%20such%20a%20GKG%2C%20namely%20aligning%20the%20same%20geographic%20entities%20from%20different%20maps.%20We%20present%20this%20workflow%20and%20the%20related%20methods%20for%20implementation%2C%20and%20systematically%20evaluate%20their%20performances%20using%20two%20different%20datasets%20of%20historical%20maps.%20The%20evaluation%20results%20show%20that%20machine%20learning%20and%20deep%20learning%20models%20for%20matching%20place%20names%20are%20sensitive%20to%20the%20thresholds%20learned%20from%20the%20training%20data%2C%20and%20a%20combination%20of%20measures%20based%20on%20string%20similarity%2C%20spatial%20distance%2C%20and%20approximate%20topological%20relation%20achieves%2
0the%20best%20performance%20with%20an%20average%20F-score%20of%200.89.%22%2C%22date%22%3A%222021-10-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2020.1845702%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2020.1845702%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A20%3A23Z%22%7D%7D%2C%7B%22key%22%3A%22A9VYFZP3%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Duan%20et%20al.%22%2C%22parsedDate%22%3A%222020-04-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDuan%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1698742%26%23039%3B%26gt%3BAutomatic%20alignment%20of%20contemporary%20vector%20data%20and%20georeferenced%20historical%20maps%20using%20reinforcement%20learning%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20alignment%20of%20contemporary%20vector%20data%20and%20georeferenced%20historical%20maps%20using%20reinforcement%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Duan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao-Yi%22%2C%22lastName%22%3A%22Chiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefan%22%2C%22lastName%22%3A%22Leyk%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Johannes%20H.%22%2C%22lastName%22%3A%22Uhl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C
%22firstName%22%3A%22Craig%20A.%22%2C%22lastName%22%3A%22Knoblock%22%7D%5D%2C%22abstractNote%22%3A%22With%20large%20amounts%20of%20digital%20map%20archives%20becoming%20available%2C%20automatically%20extracting%20information%20from%20scanned%20historical%20maps%20is%20needed%20for%20many%20domains%20that%20require%20long-term%20historical%20geographic%20data.%20Convolutional%20Neural%20Networks%20%28CNN%29%20are%20powerful%20techniques%20that%20can%20be%20used%20for%20extracting%20locations%20of%20geographic%20features%20from%20scanned%20maps%20if%20sufficient%20representative%20training%20data%20are%20available.%20Existing%20spatial%20data%20can%20provide%20the%20approximate%20locations%20of%20corresponding%20geographic%20features%20in%20historical%20maps%20and%20thus%20be%20useful%20to%20annotate%20training%20data%20automatically.%20However%2C%20the%20feature%20representations%2C%20publication%20date%2C%20production%20scales%2C%20and%20spatial%20reference%20systems%20of%20contemporary%20vector%20data%20are%20typically%20very%20different%20from%20those%20of%20historical%20maps.%20Hence%2C%20such%20auxiliary%20data%20cannot%20be%20directly%20used%20for%20annotation%20of%20the%20precise%20locations%20of%20the%20features%20of%20interest%20in%20the%20scanned%20historical%20maps.%20This%20research%20introduces%20an%20automatic%20vector-to-raster%20alignment%20algorithm%20based%20on%20reinforcement%20learning%20to%20annotate%20precise%20locations%20of%20geographic%20features%20on%20scanned%20maps.%20This%20paper%20models%20the%20alignment%20problem%20using%20the%20reinforcement%20learning%20framework%2C%20which%20enables%20informed%2C%20efficient%20searches%20for%20matching%20features%20without%20pre-processing%20steps%2C%20such%20as%20extracting%20specific%20feature%20signatures%20%28e.g.%20road%20intersections%29.%20The%20experimental%20results%20show%20that%20our%20algorithm%20can%20be%20applied%20to%20various%20features%20%28roads%2C%20water%20lines%2C%20and%20railro
ads%29%20and%20achieve%20high%20accuracy.%22%2C%22date%22%3A%222020-04-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2019.1698742%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2019.1698742%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A31%3A40Z%22%7D%7D%5D%7D
Qin, Z. et al. A Registration Method for Historical Maps Based on Self-Supervised Feature Matching. 2025
Wu, S. et al. Unsupervised historical map registration by a deformation neural network. 2022
Feng, J. et al. DeepMM: Deep Learning Based Map Matching With Data Augmentation. 2022
Duan, W. et al. A Label Correction Algorithm Using Prior Information for Automatic and Accurate Geospatial Object Recognition. 2021
Sun, K. et al. Aligning geographic entities from historical maps for building knowledge graphs. 2021
Duan, W. et al. Automatic alignment of contemporary vector data and georeferenced historical maps using reinforcement learning. 2020
Inpainting
Yu, W. et al. Filling gaps of cartographic polylines by using an encoder–decoder model. 2022
3D Reconstruction
tructed%20figures%20may%20be%20used%20for%20animation%20and%20storytelling%20in%20digital%203D%20maps.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2224063%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2224063%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T17%3A45%3A42Z%22%7D%7D%2C%7B%22key%22%3A%22E3Z9UT7R%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Ran%20et%20al.%22%2C%22parsedDate%22%3A%222022-12-09%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BRan%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2227-7390%5C%2F10%5C%2F24%5C%2F4677%26%23039%3B%26gt%3BIntelligent%20Generation%20of%20Cross%20Sections%20Using%20a%20Conditional%20Generative%20Adversarial%20Network%20and%20Application%20to%20Regional%203D%20Geological%20Modeling%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Intelligent%20Generation%20of%20Cross%20Sections%20Using%20a%20Conditional%20Generative%20Adversarial%20Network%20and%20Application%20to%20Regional%203D%20Geological%20Modeling%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiangjin%22%2C%22lastName%22%3A%22Ran%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Linfu%22%2C%22lastName%22%3A%22Xue%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuejia%22%2C%22lastName%22%3A%22Sang%22%7D%2C%7B%22creatorType%22%3A%22autho
r%22%2C%22firstName%22%3A%22Yao%22%2C%22lastName%22%3A%22Pei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yanyan%22%2C%22lastName%22%3A%22Zhang%22%7D%5D%2C%22abstractNote%22%3A%22The%20cross%20section%20is%20the%20basic%20data%20for%20building%203D%20geological%20models.%20It%20is%20inefficient%20to%20draw%20a%20large%20number%20of%20cross%20sections%20to%20build%20an%20accurate%20model.%20This%20paper%20reports%20the%20use%20of%20multi-source%20and%20heterogeneous%20geological%20data%2C%20such%20as%20geological%20maps%2C%20gravity%20and%20aeromagnetic%20data%2C%20by%20a%20conditional%20generative%20adversarial%20network%20%28CGAN%29%20and%20implements%20an%20intelligent%20generation%20method%20of%20cross%20sections%20to%20overcome%20the%20problem%20of%20inefficient%20modeling%20data%20based%20on%20CGAN.%20Intelligent%20generation%20of%20cross%20sections%20and%203D%20geological%20modeling%20are%20carried%20out%20in%20three%20different%20areas%20in%20Liaoning%20Province.%20The%20results%20show%20that%3A%20%28a%29%20the%20accuracy%20of%20the%20proposed%20method%20is%20higher%20than%20the%20GAN%20and%20Variational%20AutoEncoder%20%28VAE%29%20models%2C%20achieving%2087%25%2C%2045%25%20and%2068%25%2C%20respectively%3B%20%28b%29%20the%203D%20geological%20model%20constructed%20by%20the%20generated%20cross%20sections%20in%20our%20study%20is%20consistent%20with%20manual%20creation%20in%20terms%20of%20stratum%20continuity%20and%20thickness.%20This%20study%20suggests%20that%20the%20proposed%20method%20is%20significant%20for%20surmounting%20the%20difficulty%20in%20data%20processing%20involved%20in%20regional%203D%20geological%20modeling.%22%2C%22date%22%3A%222022-12-09%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fmath10244677%22%2C%22ISSN%22%3A%222227-7390%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2227-7390%5C%2F10%5C%2F24%5C%2F4677%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-10-17T18%3A06%3A42Z%22%7D%7D%5
D%7D
Xiao, T. et al. Sketch2Terrain: AI-Driven Real-Time Terrain Sketch Mapping in Augmented Reality. 2025
Schnürer, R. et al. Inferring implicit 3D representations from human figures on pictorial maps. 2024
Ran, X. et al. Intelligent Generation of Cross Sections Using a Conditional Generative Adversarial Network and Application to Regional 3D Geological Modeling. 2022
Geolocalisation
Qian, C. et al. A Coarse-to-Fine Model for Geolocating Chinese Addresses. 2020
Geographic Entity Extraction
Verdoodt, J. et al. Geosocial media's perspective on energy: a text classification approach using natural language processing. 2025
Mao, H. et al. Mapping near-real-time power outages from social media. 2019
Object and Phenomenon Detection
Huang, Z. et al. Graph neural network-based identification of ditch matching patterns across multi-scale geospatial data. 2023
Juhász, L. et al. ChatGPT as a mapping assistant: A novel method to enrich maps with generative AI and content derived from street-level photographs. 2023
Xie, X. et al. Building Function Recognition Using the Semi-Supervised Classification. 2022
Valdez, D.B. et al. A Deep Learning Approach of Recognizing Natural Disasters on Images using Convolutional Neural Network and Transfer Learning. 2021
Feng, Y. et al. Extraction of Pluvial Flood Relevant Volunteered Geographic Information (VGI) by Deep Learning from User Generated Texts and Photos. 2018
eep%20Learning%20from%20User%20Generated%20Texts%20and%20Photos%26lt%3B%5C%2Fa%26gt%3B.%202018%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Extraction%20of%20Pluvial%20Flood%20Relevant%20Volunteered%20Geographic%20Information%20%28VGI%29%20by%20Deep%20Learning%20from%20User%20Generated%20Texts%20and%20Photos%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yu%22%2C%22lastName%22%3A%22Feng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Monika%22%2C%22lastName%22%3A%22Sester%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20years%2C%20pluvial%20floods%20caused%20by%20extreme%20rainfall%20events%20have%20occurred%20frequently.%20Especially%20in%20urban%20areas%2C%20they%20lead%20to%20serious%20damages%20and%20endanger%20the%20citizens%5Cu2019%20safety.%20Therefore%2C%20real-time%20information%20about%20such%20events%20is%20desirable.%20With%20the%20increasing%20popularity%20of%20social%20media%20platforms%2C%20such%20as%20Twitter%20or%20Instagram%2C%20information%20provided%20by%20voluntary%20users%20becomes%20a%20valuable%20source%20for%20emergency%20response.%20Many%20applications%20have%20been%20built%20for%20disaster%20detection%20and%20flood%20mapping%20using%20crowdsourcing.%20Most%20of%20the%20applications%20so%20far%20have%20merely%20used%20keyword%20filtering%20or%20classical%20language%20processing%20methods%20to%20identify%20disaster%20relevant%20documents%20based%20on%20user%20generated%20texts.%20As%20the%20reliability%20of%20social%20media%20information%20is%20often%20under%20criticism%2C%20the%20precision%20of%20information%20retrieval%20plays%20a%20significant%20role%20for%20further%20analyses.%20Thus%2C%20in%20this%20paper%2C%20high%20quality%20eyewitnesses%20of%20rainfall%20and%20flooding%20events%20are%20retrieved%20from%20social%20media%20by%20applying%20deep%20learning%20approaches%20on%20user%20generated%20te
xts%20and%20photos.%20Subsequently%2C%20events%20are%20detected%20through%20spatiotemporal%20clustering%20and%20visualized%20together%20with%20these%20high%20quality%20eyewitnesses%20in%20a%20web%20map%20application.%20Analyses%20and%20case%20studies%20are%20conducted%20during%20flooding%20events%20in%20Paris%2C%20London%20and%20Berlin.%22%2C%22date%22%3A%222018%5C%2F2%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi7020039%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F7%5C%2F2%5C%2F39%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A53%3A47Z%22%7D%7D%5D%7D
Xie, X. et al. Building Function Recognition Using the Semi-Supervised Classification. 2022
Remote Sensing
Metzger, N. et al. High-resolution population maps derived from Sentinel-1 and Sentinel-2. 2024. https://doi.org/10.1016/j.rse.2024.114383
Li, W. et al. Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping. 2024. https://doi.org/10.3390/rs16050797
Li, W. et al. Assessment of a new GeoAI foundation model for flood inundation mapping. 2023. https://doi.org/10.1145/3615886.3627747
Li, W. et al. GeoImageNet: a multi-source natural feature benchmark dataset for GeoAI and supervised machine learning. 2023. https://doi.org/10.1007/s10707-022-00476-z
Hsu, C.-Y. et al. Explainable GeoAI: can saliency maps help interpret artificial intelligence's learning process? An empirical study on natural feature detection. 2023. https://doi.org/10.1080/13658816.2023.2191256
Li, W. et al. Tobler's First Law in GeoAI: A Spatially Explicit Deep Learning Model for Terrain Feature Detection under Weak Supervision. 2021. https://doi.org/10.1080/24694452.2021.1877527
Wang, S. et al. GeoAI in terrain analysis: Enabling multi-source deep learning and data fusion for natural feature detection. 2021. https://doi.org/10.1016/j.compenvurbsys.2021.101715
Hsu, C.-Y. et al. Knowledge-Driven GeoAI: Integrating Spatial Knowledge into Multi-Scale Deep Learning for Mars Crater Detection. 2021. https://www.mdpi.com/2072-4292/13/11/2116
id%20network%20is%20leveraged%20to%20generate%20feature%20maps%20with%20rich%20semantics%20across%20multiple%20object%20scales%3B%20%282%29%20prior%20geospatial%20knowledge%20based%20on%20the%20Hough%20transform%20is%20integrated%20to%20enable%20more%20accurate%20localization%20of%20potential%20craters%3B%20and%20%283%29%20a%20scale-aware%20classifier%20is%20adopted%20to%20increase%20the%20prediction%20accuracy%20of%20both%20large%20and%20small%20crater%20instances.%20The%20results%20show%20that%20the%20proposed%20strategies%20bring%20a%20significant%20increase%20in%20crater%20detection%20performance%20than%20the%20popular%20Faster%20R-CNN%20model.%20The%20integration%20of%20geospatial%20domain%20knowledge%20into%20the%20data-driven%20analytics%20moves%20GeoAI%20research%20up%20to%20the%20next%20level%20to%20enable%20knowledge-driven%20GeoAI.%20This%20research%20can%20be%20applied%20to%20a%20wide%20variety%20of%20object%20detection%20and%20image%20analysis%20tasks.%22%2C%22date%22%3A%222021%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Frs13112116%22%2C%22ISSN%22%3A%222072-4292%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F13%5C%2F11%5C%2F2116%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-20T17%3A13%3A39Z%22%7D%7D%2C%7B%22key%22%3A%22DLIEM9PV%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20and%20Hsu%22%2C%22parsedDate%22%3A%222020-04-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2018.1542697%26%23039%3B%26gt%3BAutomated%20terr
ain%20feature%20identification%20from%20remote%20sensing%20imagery%3A%20a%20deep%20learning%20approach%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automated%20terrain%20feature%20identification%20from%20remote%20sensing%20imagery%3A%20a%20deep%20learning%20approach%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenwen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chia-Yu%22%2C%22lastName%22%3A%22Hsu%22%7D%5D%2C%22abstractNote%22%3A%22Terrain%20feature%20detection%20is%20a%20fundamental%20task%20in%20terrain%20analysis%20and%20landscape%20scene%20interpretation.%20Discovering%20where%20a%20specific%20feature%20%28i.e.%20sand%20dune%2C%20crater%2C%20etc.%29%20is%20located%20and%20how%20it%20evolves%20over%20time%20is%20essential%20for%20understanding%20landform%20processes%20and%20their%20impacts%20on%20the%20environment%2C%20ecosystem%2C%20and%20human%20population.%20Traditional%20induction-based%20approaches%20are%20challenged%20by%20their%20inefficiency%20for%20generalizing%20diverse%20and%20complex%20terrain%20features%20as%20well%20as%20their%20performance%20for%20scalable%20processing%20of%20the%20massive%20geospatial%20data%20available.%20This%20paper%20presents%20a%20new%20deep%20learning%20%28DL%29%20approach%20to%20support%20automatic%20detection%20of%20terrain%20features%20from%20remotely%20sensed%20images.%20The%20novelty%20of%20this%20work%20lies%20in%3A%20%281%29%20a%20terrain%20feature%20database%20containing%2012%2C000%20remotely%20sensed%20images%20%281%2C000%20original%20images%20and%2011%2C000%20derived%20images%20from%20data%20augmentation%29%20that%20supports%20data-driven%20model%20training%20and%20new%20discovery%3B%20%282%29%20a%20DL-based%20object%20detection%20network%20empowered%20by%20ensemble%20learning%20and%20deep%20and%20deeper%20convo
lutional%20neural%20networks%20to%20achieve%20high-accuracy%20object%20detection%3B%20and%20%283%29%20fine-tuning%20the%20model%5Cu2019s%20characteristics%20and%20behaviors%20to%20identify%20the%20best%20combination%20of%20hyperparameters%20and%20other%20network%20factors.%20The%20introduction%20of%20DL%20into%20geospatial%20applications%20is%20expected%20to%20contribute%20significantly%20to%20intelligent%20terrain%20analysis%2C%20landscape%20scene%20interpretation%2C%20and%20the%20maturation%20of%20spatial%20data%20science.%22%2C%22date%22%3A%222020-04-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2018.1542697%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2018.1542697%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-20T16%3A48%3A34Z%22%7D%7D%2C%7B%22key%22%3A%22API3KTAY%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mubin%20et%20al.%22%2C%22parsedDate%22%3A%222019-10-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMubin%2C%20N.A.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F01431161.2019.1569282%26%23039%3B%26gt%3BYoung%20and%20mature%20oil%20palm%20tree%20detection%20and%20counting%20using%20convolutional%20neural%20network%20deep%20learning%20method%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Young%20and%20mature%20oil%20palm%20tree%20detection%20and%20counting%20using%20convolutional%20neural%20network%20deep%20learning%20method%22%2C%22creat
ors%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nurulain%20Abd%22%2C%22lastName%22%3A%22Mubin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Eiswary%22%2C%22lastName%22%3A%22Nadarajoo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Helmi%20Zulhaidi%20Mohd%22%2C%22lastName%22%3A%22Shafri%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Alireza%22%2C%22lastName%22%3A%22Hamedianfar%22%7D%5D%2C%22abstractNote%22%3A%22Detection%20and%20counting%20of%20oil%20palm%20are%20important%20in%20oil%20palm%20plantation%20management.%20In%20this%20article%2C%20we%20use%20a%20deep%20learning%20approach%20to%20predict%20and%20count%20oil%20palms%20in%20satellite%20imagery.%20Previous%20oil%20palm%20detections%20commonly%20focus%20on%20detecting%20oil%20palm%20trees%20that%20do%20not%20have%20overlapping%20crowns.%20Besides%20this%2C%20there%20is%20a%20lack%20of%20research%20that%20builds%20separate%20detection%20system%20for%20young%20and%20mature%20oil%20palm%2C%20utilizing%20deep%20learning%20approach%20for%20oil%20palm%20detection%20and%20combining%20geographic%20information%20system%20%28GIS%29%20with%20deep%20learning%20approach.%20This%20research%20attempts%20to%20fill%20this%20gap%20by%20utilizing%20two%20different%20convolution%20neural%20networks%20%28CNNs%29%20to%20detect%20young%20and%20mature%20oil%20palm%20separately%20and%20uses%20GIS%20during%20data%20processing%20and%20result%20storage%20process.%20The%20initial%20architecture%20developed%20is%20based%20on%20a%20CNN%20called%20LeNet.%20The%20training%20process%20reduces%20loss%20using%20adaptive%20gradient%20algorithm%20with%20a%20mini%20batch%20of%20size%2020%20for%20all%20the%20training%20sets%20used.%20Then%2C%20we%20exported%20prediction%20results%20to%20GIS%20software%20and%20created%20oil%20palm%20prediction%20map%20for%20mature%20and%20young%20oil%20palm.%20Based%20on%20the%20proposed%20method%2C%20the%20overall%20accuracies%20for%
20young%20and%20mature%20oil%20palm%20are%2095.11%25%20and%2092.96%25%2C%20respectively.%20Overall%2C%20the%20classifier%20performs%20well%20on%20previously%20unseen%20datasets%2C%20and%20is%5Cu00a0able%20to%20accurately%20detect%20oil%20palm%20from%20background%2C%20including%20plant%20shadows%20and%20other%20plants.%22%2C%22date%22%3A%222019-10-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F01431161.2019.1569282%22%2C%22ISSN%22%3A%220143-1161%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F01431161.2019.1569282%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T19%3A30%3A04Z%22%7D%7D%2C%7B%22key%22%3A%22D3BQR8YU%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Sublime%20and%20Kalinicheva%22%2C%22parsedDate%22%3A%222019-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSublime%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F11%5C%2F9%5C%2F1123%26%23039%3B%26gt%3BAutomatic%20Post-Disaster%20Damage%20Mapping%20Using%20Deep-Learning%20Techniques%20for%20Change%20Detection%3A%20Case%20Study%20of%20the%20Tohoku%20Tsunami%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Automatic%20Post-Disaster%20Damage%20Mapping%20Using%20Deep-Learning%20Techniques%20for%20Change%20Detection%3A%20Case%20Study%20of%20the%20Tohoku%20Tsunami%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22J%5Cu00e9r%5Cu00e9mie%22%2C%22lastName%22%3A%22Sublime%22%7D%2C%7B%22creatorType%22%3A%22auth
or%22%2C%22firstName%22%3A%22Ekaterina%22%2C%22lastName%22%3A%22Kalinicheva%22%7D%5D%2C%22abstractNote%22%3A%22Post-disaster%20damage%20mapping%20is%20an%20essential%20task%20following%20tragic%20events%20such%20as%20hurricanes%2C%20earthquakes%2C%20and%20tsunamis.%20It%20is%20also%20a%20time-consuming%20and%20risky%20task%20that%20still%20often%20requires%20the%20sending%20of%20experts%20on%20the%20ground%20to%20meticulously%20map%20and%20assess%20the%20damages.%20Presently%2C%20the%20increasing%20number%20of%20remote-sensing%20satellites%20taking%20pictures%20of%20Earth%20on%20a%20regular%20basis%20with%20programs%20such%20as%20Sentinel%2C%20ASTER%2C%20or%20Landsat%20makes%20it%20easy%20to%20acquire%20almost%20in%20real%20time%20images%20from%20areas%20struck%20by%20a%20disaster%20before%20and%20after%20it%20hits.%20While%20the%20manual%20study%20of%20such%20images%20is%20also%20a%20tedious%20task%2C%20progress%20in%20artificial%20intelligence%20and%20in%20particular%20deep-learning%20techniques%20makes%20it%20possible%20to%20analyze%20such%20images%20to%20quickly%20detect%20areas%20that%20have%20been%20flooded%20or%20destroyed.%20From%20there%2C%20it%20is%20possible%20to%20evaluate%20both%20the%20extent%20and%20the%20severity%20of%20the%20damages.%20In%20this%20paper%2C%20we%20present%20a%20state-of-the-art%20deep-learning%20approach%20for%20change%20detection%20applied%20to%20satellite%20images%20taken%20before%20and%20after%20the%20Tohoku%20tsunami%20of%202011.%20We%20compare%20our%20approach%20with%20other%20machine-learning%20methods%20and%20show%20that%20our%20approach%20is%20superior%20to%20existing%20techniques%20due%20to%20its%20unsupervised%20nature%2C%20good%20performance%2C%20and%20relative%20speed%20of%20analysis.%22%2C%22date%22%3A%222019%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Frs11091123%22%2C%22ISSN%22%3A%222072-4292%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F11%5C%2F9%5C%2F1123%22%2C%22coll
ections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-06-26T09%3A16%3A14Z%22%7D%7D%2C%7B%22key%22%3A%22JWIGRUI3%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222017-11-07%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3149808.3149814%26%23039%3B%26gt%3BRecognizing%20terrain%20features%20on%20terrestrial%20surface%20using%20a%20deep%20learning%20model%3A%20an%20example%20with%20crater%20detection%26lt%3B%5C%2Fa%26gt%3B.%202017%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Recognizing%20terrain%20features%20on%20terrestrial%20surface%20using%20a%20deep%20learning%20model%3A%20an%20example%20with%20crater%20detection%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenwen%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bin%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chia-Yu%22%2C%22lastName%22%3A%22Hsu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yixing%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fengbo%22%2C%22lastName%22%3A%22Ren%22%7D%5D%2C%22abstractNote%22%3A%22This%20paper%20exploits%20the%20use%20of%20a%20popular%20deep%20learning%20model%20-%20the%20faster-RCNN%20-%20to%20support%20automatic%20terrain%20feature%20detection%20and%20classification%20using%20a%20mixed%20
set%20of%20optimal%20remote%20sensing%20and%20natural%20images.%20Crater%20detection%20is%20used%20as%20the%20case%20study%20in%20this%20research%20since%20this%20geomorphological%20feature%20provides%20important%20information%20about%20surface%20aging.%20Craters%2C%20such%20as%20impact%20craters%2C%20also%20effect%20global%20changes%20in%20many%20aspects%2C%20such%20as%20geography%2C%20topography%2C%20mineral%20and%20hydrocarbon%20production%2C%20etc.%20The%20collected%20data%20were%20labeled%20and%20the%20network%20was%20trained%20through%20a%20GPU%20server.%20Experimental%20results%20show%20that%20the%20faster-RCNN%20model%20coupled%20with%20a%20widely%20used%20convolutional%20network%20ZF-net%20performs%20well%20in%20detecting%20craters%20on%20the%20terrestrial%20surface.%22%2C%22date%22%3A%22November%207%2C%202017%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%201st%20Workshop%20on%20Artificial%20Intelligence%20and%20Deep%20Learning%20for%20Geographic%20Knowledge%20Discovery%22%2C%22conferenceName%22%3A%22%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3149808.3149814%22%2C%22ISBN%22%3A%22978-1-4503-5498-1%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3149808.3149814%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-20T17%3A25%3A33Z%22%7D%7D%5D%7D
Metzger, N. et al. High-resolution population maps derived from Sentinel-1 and Sentinel-2. 2024
Li, W. et al. Assessment of a new GeoAI foundation model for flood inundation mapping. 2023
Representation Learning
Chu, C. and Shahabi, C. Geo2Vec: Shape- and Distance-Aware Neural Representation of Geospatial Entities. 2025
Mai, G. et al. SRL: Towards a General-Purpose Framework for Spatial Representation Learning. 2024
Siampou, M.D. et al. Poly2Vec: Polymorphic Encoding of Geospatial Objects for Spatial Reasoning with Deep Neural Networks. 2024
Yu, D. et al. PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph. 2024
Mai, G. et al. Sphere2Vec: A general-purpose location representation learning over a spherical surface for large-scale geospatial predictions. 2023
However%2C%20all%20current%202D%20and%203D%20location%20encoders%20are%20designed%20to%20model%20point%20distances%20in%20Euclidean%20space.%20So%20when%20applied%20to%20large-scale%20real-world%20GPS%20coordinate%20datasets%20%28e.g.%2C%20species%20or%20satellite%20images%20taken%20all%20over%20the%20world%29%2C%20which%20require%20distance%20metric%20learning%20on%20the%20spherical%20surface%2C%20both%20types%20of%20models%20can%20fail%20due%20to%20the%20map%20projection%20distortion%20problem%20%282D%29%20and%20the%20spherical-to-Euclidean%20distance%20approximation%20error%20%283D%29.%20To%20solve%20these%20problems%2C%20we%20propose%20a%20multi-scale%20location%20encoder%20called%20Sphere2Vec%20which%20can%20preserve%20spherical%20distances%20when%20encoding%20point%20coordinates%20on%20a%20spherical%20surface.%20We%20developed%20a%20unified%20view%20of%20distance-reserving%20encoding%20on%20spheres%20based%20on%20the%20Double%20Fourier%20Sphere%20%28DFS%29.%20We%20also%20provide%20theoretical%20proof%20that%20the%20Sphere2Vec%20encoding%20preserves%20the%20spherical%20surface%20distance%20between%20any%20two%20points%2C%20while%20existing%20encoding%20schemes%20such%20as%20Space2Vec%20and%20NeRF%20do%20not.%20Experiments%20on%2020%20synthetic%20datasets%20show%20that%20Sphere2Vec%20can%20outperform%20all%20baseline%20models%20including%20the%20state-of-the-art%20%28SOTA%29%202D%20location%20encoder%20%28i.e.%2C%20Space2Vec%29%20and%203D%20encoder%20NeRF%20on%20all%20these%20datasets%20with%20up%20to%2030.8%25%20error%20rate%20reduction.%20We%20then%20apply%20Sphere2Vec%20to%20three%20geo-aware%20image%20classification%20tasks%20-%20fine-grained%20species%20recognition%2C%20Flickr%20image%20recognition%2C%20and%20remote%20sensing%20image%20classification.%20Results%20on%207%20real-world%20datasets%20show%20the%20superiority%20of%20Sphere2Vec%20over%20multiple%202D%20and%203D%20location%20encoders%20on%20all%20three%20tasks.%20Further%20analysis%20shows%20that%2
0Sphere2Vec%20outperforms%20other%20location%20encoder%20models%2C%20especially%20in%20the%20polar%20regions%20and%20data-sparse%20areas%20because%20of%20its%20nature%20for%20spherical%20surface%20distance%20preservation.%20Code%20and%20data%20of%20this%20work%20are%20available%20at%20https%3A%5C%2F%5C%2Fgengchenmai.github.io%5C%2Fsphere2vec-website%5C%2F.%22%2C%22date%22%3A%222023-08-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.isprsjprs.2023.06.016%22%2C%22ISSN%22%3A%220924-2716%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0924271623001818%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A12%3A35Z%22%7D%7D%2C%7B%22key%22%3A%228YFZ6DRN%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mai%20et%20al.%22%2C%22parsedDate%22%3A%222023-04-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMai%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs10707-022-00481-2%26%23039%3B%26gt%3BTowards%20general-purpose%20representation%20learning%20of%20polygonal%20geometries%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Towards%20general-purpose%20representation%20learning%20of%20polygonal%20geometries%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gengchen%22%2C%22lastName%22%3A%22Mai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chiyu%22%2C%22lastName%22%3A%22Jiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C
%22firstName%22%3A%22Weiwei%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rui%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yao%22%2C%22lastName%22%3A%22Xuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ling%22%2C%22lastName%22%3A%22Cai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Krzysztof%22%2C%22lastName%22%3A%22Janowicz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Stefano%22%2C%22lastName%22%3A%22Ermon%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ni%22%2C%22lastName%22%3A%22Lao%22%7D%5D%2C%22abstractNote%22%3A%22Neural%20network%20representation%20learning%20for%20spatial%20data%20%28e.g.%2C%20points%2C%20polylines%2C%20polygons%2C%20and%20networks%29%20is%20a%20common%20need%20for%20geographic%20artificial%20intelligence%20%28GeoAI%29%20problems.%20In%20recent%20years%2C%20many%20advancements%20have%20been%20made%20in%20representation%20learning%20for%20points%2C%20polylines%2C%20and%20networks%2C%20whereas%20little%20progress%20has%20been%20made%20for%20polygons%2C%20especially%20complex%20polygonal%20geometries.%20In%20this%20work%2C%20we%20focus%20on%20developing%20a%20general-purpose%20polygon%20encoding%20model%2C%20which%20can%20encode%20a%20polygonal%20geometry%20%28with%20or%20without%20holes%2C%20single%20or%20multipolygons%29%20into%20an%20embedding%20space.%20The%20result%20embeddings%20can%20be%20leveraged%20directly%20%28or%20finetuned%29%20for%20downstream%20tasks%20such%20as%20shape%20classification%2C%20spatial%20relation%20prediction%2C%20building%20pattern%20classification%2C%20cartographic%20building%20generalization%2C%20and%20so%20on.%20To%20achieve%20model%20generalizability%20guarantees%2C%20we%20identify%20a%20few%20desirable%20properties%20that%20the%20encoder%20should%20satisfy%3A%20loop%20origin%20invariance%2C%20trivial%20vertex%20invariance%2C%20pa
rt%20permutation%20invariance%2C%20and%20topology%20awareness.%20We%20explore%20two%20different%20designs%20for%20the%20encoder%3A%20one%20derives%20all%20representations%20in%20the%20spatial%20domain%20and%20can%20naturally%20capture%20local%20structures%20of%20polygons%3B%20the%20other%20leverages%20spectral%20domain%20representations%20and%20can%20easily%20capture%20global%20structures%20of%20polygons.%20For%20the%20spatial%20domain%20approach%20we%20propose%20ResNet1D%2C%20a%201D%20CNN-based%20polygon%20encoder%2C%20which%20uses%20circular%20padding%20to%20achieve%20loop%20origin%20invariance%20on%20simple%20polygons.%20For%20the%20spectral%20domain%20approach%20we%20develop%20NUFTspec%20based%20on%20Non-Uniform%20Fourier%20Transformation%20%28NUFT%29%2C%20which%20naturally%20satisfies%20all%20the%20desired%20properties.%20We%20conduct%20experiments%20on%20two%20different%20tasks%3A%201%29%20polygon%20shape%20classification%20based%20on%20the%20commonly%20used%20MNIST%20dataset%3B%202%29%20polygon-based%20spatial%20relation%20prediction%20based%20on%20two%20new%20datasets%20%28DBSR-46K%20and%20DBSR-cplx46K%29%20constructed%20from%20OpenStreetMap%20and%20DBpedia.%20Our%20results%20show%20that%20NUFTspec%20and%20ResNet1D%20outperform%20multiple%20existing%20baselines%20with%20significant%20margins.%20While%20ResNet1D%20suffers%20from%20model%20performance%20degradation%20after%20shape-invariance%20geometry%20modifications%2C%20NUFTspec%5Cu00a0is%20very%20robust%20to%20these%20modifications%20due%20to%20the%20nature%20of%20the%20NUFT%20representation.%20NUFTspec%20is%20able%20to%20jointly%20consider%20all%20parts%20of%20a%20multipolygon%20and%20their%20spatial%20relations%20during%20prediction%20while%20ResNet1D%20can%20recognize%20the%20shape%20details%20which%20are%20sometimes%20important%20for%20classification.%20This%20result%20points%20to%20a%20promising%20research%20direction%20of%20combining%20spatial%20and%20spectral%20representations.%22%2C%22date%22%3A%22
2023-04-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs10707-022-00481-2%22%2C%22ISSN%22%3A%221573-7624%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs10707-022-00481-2%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-02T14%3A18%3A11Z%22%7D%7D%2C%7B%22key%22%3A%2257BLZCWM%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mai%20et%20al.%22%2C%22parsedDate%22%3A%222022-04-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMai%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2021.2004602%26%23039%3B%26gt%3BA%20review%20of%20location%20encoding%20for%20GeoAI%3A%20methods%20and%20applications%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20review%20of%20location%20encoding%20for%20GeoAI%3A%20methods%20and%20applications%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Gengchen%22%2C%22lastName%22%3A%22Mai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Krzysztof%22%2C%22lastName%22%3A%22Janowicz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yingjie%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rui%22%2C%22lastName%22%3A%22Zhu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%
22firstName%22%3A%22Ling%22%2C%22lastName%22%3A%22Cai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ni%22%2C%22lastName%22%3A%22Lao%22%7D%5D%2C%22abstractNote%22%3A%22A%20common%20need%20for%20artificial%20intelligence%20models%20in%20the%20broader%20geoscience%20is%20to%20encode%20various%20types%20of%20spatial%20data%2C%20such%20as%20points%2C%20polylines%2C%20polygons%2C%20graphs%2C%20or%20rasters%2C%20in%20a%20hidden%20embedding%20space%20so%20that%20they%20can%20be%20readily%20incorporated%20into%20deep%20learning%20models.%20One%20fundamental%20step%20is%20to%20encode%20a%20single%20point%20location%20into%20an%20embedding%20space%2C%20such%20that%20this%20embedding%20is%20learning-friendly%20for%20downstream%20machine%20learning%20models.%20We%20call%20this%20process%20location%20encoding.%20However%2C%20there%20lacks%20a%20systematic%20review%20on%20location%20encoding%2C%20its%20potential%20applications%2C%20and%20key%20challenges%20that%20need%20to%20be%20addressed.%20This%20paper%20aims%20to%20fill%20this%20gap.%20We%20first%20provide%20a%20formal%20definition%20of%20location%20encoding%2C%20and%20discuss%20the%20necessity%20of%20it%20for%20GeoAI%20research.%20Next%2C%20we%20provide%20a%20comprehensive%20survey%20about%20the%20current%20landscape%20of%20location%20encoding%20research.%20We%20classify%20location%20encoding%20models%20into%20different%20categories%20based%20on%20their%20inputs%20and%20encoding%20methods%2C%20and%20compare%20them%20based%20on%20whether%20they%20are%20parametric%2C%20multi-scale%2C%20distance%20preserving%2C%20and%20direction%20aware.%20We%20demonstrate%20that%20existing%20location%20encoders%20can%20be%20unified%20under%20one%20formulation%20framework.%20We%20also%20discuss%20the%20application%20of%20location%20encoding.%20Finally%2C%20we%20point%20out%20several%20challenges%20that%20need%20to%20be%20solved%20in%20the%20future.%22%2C%22date%22%3A%222022-04-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2
210.1080%5C%2F13658816.2021.2004602%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2021.2004602%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A13%3A17Z%22%7D%7D%5D%7D
Chu, C. et al. Geo2Vec: Shape- and Distance-Aware Neural Representation of Geospatial Entities. 2025
Mai, G. et al. SRL: Towards a General-Purpose Framework for Spatial Representation Learning. 2024
Siampou, M.D. et al. Poly2Vec: Polymorphic Encoding of Geospatial Objects for Spatial Reasoning with Deep Neural Networks. 2024
Mai, G. et al. Sphere2Vec: A general-purpose location representation learning over a spherical surface for large-scale geospatial predictions. 2023
Mai, G. et al. Towards general-purpose representation learning of polygonal geometries. 2023
Mai, G. et al. A review of location encoding for GeoAI: methods and applications. 2022
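Several of the entries above concern location encoding: mapping a raw coordinate pair into a high-dimensional, learning-friendly vector. As a rough illustration of the core idea surveyed in Mai et al. (2022), and not the reference implementation of any listed paper, a minimal multi-scale sinusoidal encoder (Space2Vec-style; all parameter choices here are hypothetical) might look like:

```python
import numpy as np

def multiscale_location_encoding(lon, lat, num_scales=8,
                                 min_lambda=1.0, max_lambda=360.0):
    """Toy multi-scale sinusoidal location encoder.

    Projects each coordinate onto sine/cosine waves at geometrically
    spaced wavelengths, producing a deterministic embedding whose
    components capture spatial structure at multiple scales.
    """
    coords = np.array([lon, lat], dtype=float)
    # Geometrically spaced wavelengths between min_lambda and max_lambda.
    exponents = np.arange(num_scales) / max(num_scales - 1, 1)
    scales = min_lambda * (max_lambda / min_lambda) ** exponents
    features = []
    for lam in scales:
        features.append(np.sin(2 * np.pi * coords / lam))
        features.append(np.cos(2 * np.pi * coords / lam))
    # Shape: (2 coordinates * 2 trig functions * num_scales,)
    return np.concatenate(features)
```

Such encoders are parametric-free front ends: the resulting vector is typically fed into a small neural network, and the surveyed papers differ chiefly in how the wavelengths are chosen and whether spherical (rather than planar) distances are preserved.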
Pattern Detection (Lines)
2creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yongyang%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shuai%22%2C%22lastName%22%3A%22Jin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhanlong%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuejing%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sheng%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhong%22%2C%22lastName%22%3A%22Xie%22%7D%5D%2C%22abstractNote%22%3A%22Urban%20scenes%20consist%20of%20visual%20and%20semantic%20features%20and%20exhibit%20spatial%20relationships%20among%20land-use%20types%20%28e.g.%20industrial%20areas%20are%20far%20away%20from%20the%20residential%20zones%29.%20This%20study%20applied%20a%20graph%20convolutional%20network%20with%20neighborhood%20information%20%28henceforth%2C%20named%20the%20neighbour%20supporting%20graph%20convolutional%20neural%20network%29%2C%20to%20learn%20spatial%20relationships%20for%20urban%20scene%20classification.%20Furthermore%2C%20a%20co-occurrence%20analysis%20with%20visual%20and%20semantic%20features%20proceeded%20to%20improve%20the%20accuracy%20of%20urban%20scene%20classification.%20We%20tested%20the%20proposed%20method%20with%20the%20fifth%20ring%20road%20of%20Beijing%20with%20an%20overall%20classification%20accuracy%20of%200.827%20and%20a%20Kappa%20coefficient%20of%200.769.%20In%20comparison%20with%20other%20methods%2C%20such%20as%20support%20vector%20machine%2C%20random%20forest%2C%20and%20general%20graph%20convolutional%20network%2C%20the%20case%20study%20showed%20that%20the%20proposed%20method%20improved%20about%2010%25%20in%20urban%20scene%20classification.%22%2C%22date%22%3A%222022-10-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2022.2048834%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%
3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2022.2048834%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A50%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22VB5SVWZB%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yang%20et%20al.%22%2C%22parsedDate%22%3A%222022-10%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYang%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F10%5C%2F523%26%23039%3B%26gt%3BA%20Stacking%20Ensemble%20Learning%20Method%20to%20Classify%20the%20Patterns%20of%20Complex%20Road%20Junctions%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Stacking%20Ensemble%20Learning%20Method%20to%20Classify%20the%20Patterns%20of%20Complex%20Road%20Junctions%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lingya%22%2C%22lastName%22%3A%22Cheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Minjun%22%2C%22lastName%22%3A%22Cao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%5D%2C%22abstractNote%22%3A%22Recognizing%20the%20patterns%20of%20road%20junctions%20in%20a%20road%20network%20plays%20a%20crucial%20role%20in%20various%20applications.%20Owing%20to%20the%20diversity%20and%20complexity%20of%20morphologies%20of%20road%20junctions%2C%20traditional%20methods%20that%20rely%20heavily%20on%20manua
l%20settings%20of%20features%20and%20rules%20are%20often%20problematic.%20In%20recent%20years%2C%20several%20studies%20have%20employed%20convolutional%20neural%20networks%20%28CNNs%29%20to%20classify%20complex%20junctions.%20These%20methods%20usually%20convert%20vector-based%20junctions%20into%20raster%20representations%20with%20a%20predefined%20sampling%20area%20coverage.%20However%2C%20a%20fixed%20sampling%20area%20coverage%20cannot%20ensure%20the%20integrity%20and%20clarity%20of%20each%20junction%2C%20which%20inevitably%20leads%20to%20misclassification.%20To%20overcome%20this%20drawback%2C%20this%20study%20proposes%20a%20stacking%20ensemble%20learning%20method%20for%20classifying%20the%20patterns%20of%20complex%20road%20junctions.%20In%20this%20method%2C%20each%20junction%20is%20first%20converted%20into%20raster%20images%20with%20multiple%20area%20coverages.%20Subsequently%2C%20several%20CNN-based%20base-classifiers%20are%20trained%20using%20raster%20images%2C%20and%20they%20output%20the%20probabilities%20of%20the%20junction%20belonging%20to%20different%20patterns.%20Finally%2C%20a%20meta-classifier%20based%20on%20random%20forest%20is%20used%20to%20combine%20the%20outputs%20of%20the%20base-classifiers%20and%20learn%20to%20arrive%20at%20the%20final%20classification.%20Experimental%20results%20show%20that%20the%20proposed%20method%20can%20improve%20the%20classification%20accuracy%20for%20complex%20road%20junctions%20compared%20to%20existing%20CNN-based%20classifiers%20that%20are%20trained%20using%20raster%20representations%20of%20junctions%20with%20a%20fixed%20sampling%20area%20coverage.%22%2C%22date%22%3A%222022%5C%2F10%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11100523%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F10%5C%2F523%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T18%3A48%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22JVAQW9KR%22%2C%22library%22%3A%7B%22id%2
2%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yang%20et%20al.%22%2C%22parsedDate%22%3A%222022-09%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYang%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F9%5C%2F461%26%23039%3B%26gt%3BPattern%20Recognition%20and%20Segmentation%20of%20Administrative%20Boundaries%20Using%20a%20One-Dimensional%20Convolutional%20Neural%20Network%20and%20Grid%20Shape%20Context%20Descriptor%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Pattern%20Recognition%20and%20Segmentation%20of%20Administrative%20Boundaries%20Using%20a%20One-Dimensional%20Convolutional%20Neural%20Network%20and%20Grid%20Shape%20Context%20Descriptor%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haoran%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yiqi%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%5D%2C%22abstractNote%22%3A%22Recognizing%20morphological%20patterns%20in%20lines%20and%20segmenting%20them%20into%20homogeneous%20segments%20is%20critical%20for%20line%20generalization%20and%20other%20applications.%20Due%20to%20the%20excessive%20dependence%20on%20handcrafted%20features%20in%20existing%20methods%20and%20their%20insufficient%20consideration%20of%20contextual%20information%2C%20we%2
0propose%20a%20novel%20pattern%20recognition%20and%20segmentation%20method%20for%20lines%2C%20based%20on%20deep%20learning%20and%20shape%20context%20descriptors.%20In%20this%20method%2C%20a%20line%20is%20divided%20into%20a%20series%20of%20consecutive%20linear%20units%20of%20equal%20length%2C%20termed%20lixels.%20A%20grid%20shape%20context%20descriptor%20%28GSCD%29%20was%20designed%20to%20extract%20the%20contextual%20features%20for%20each%20lixel.%20A%20one-dimensional%20convolutional%20neural%20network%20%281D-U-Net%29%20was%20constructed%20to%20classify%20the%20pattern%20type%20of%20each%20lixel%2C%20and%20adjacent%20lixels%20with%20the%20same%20pattern%20types%20were%20fused%20to%20obtain%20segmentation%20results.%20The%20proposed%20method%20was%20applied%20to%20administrative%20boundaries%2C%20which%20were%20segmented%20into%20components%20with%20three%20different%20patterns.%20The%20experiments%20showed%20that%20the%20lixel%20classification%20accuracy%20of%20the%201D-U-Net%20reached%2090.42%25.%20The%20consistency%20ratio%20was%2092.41%25%2C%20when%20compared%20with%20the%20manual%20segmentation%20results%2C%20which%20was%20higher%20than%20either%20of%20the%20two%20existing%20machine%20learning-based%20segmentation%20methods.%22%2C%22date%22%3A%222022%5C%2F9%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi11090461%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F11%5C%2F9%5C%2F461%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A53%3A36Z%22%7D%7D%2C%7B%22key%22%3A%22RARNYL5W%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yang%20et%20al.%22%2C%22parsedDate%22%3A%222022-06-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bc
sl-entry%26quot%3B%26gt%3BYang%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2021.2024195%26%23039%3B%26gt%3BDetecting%20interchanges%20in%20road%20networks%20using%20a%20graph%20convolutional%20network%20approach%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Detecting%20interchanges%20in%20road%20networks%20using%20a%20graph%20convolutional%20network%20approach%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chenjun%22%2C%22lastName%22%3A%22Jiang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghua%22%2C%22lastName%22%3A%22Ai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Minjun%22%2C%22lastName%22%3A%22Cao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenyuan%22%2C%22lastName%22%3A%22Chen%22%7D%5D%2C%22abstractNote%22%3A%22Detecting%20interchanges%20in%20road%20networks%20benefit%20many%20applications%2C%20such%20as%20vehicle%20navigation%20and%20map%20generalization.%20Traditional%20approaches%20use%20manually%20defined%20rules%20based%20on%20geometric%2C%20topological%2C%20or%20both%20properties%2C%20and%20thus%20can%20present%20challenges%20for%20structurally%20complex%20interchange.%20To%20overcome%20this%20drawback%2C%20we%20propose%20a%20graph-based%20deep%20learning%20approach%20for%20interchange%20detection.%20First%2C%20we%20model%20the%20road%20network%20as%20a%20graph%20in%20which%20the%20nodes%20represent%20road%20segments%2C%20and%20the%20edges%20represent%20their%20connections.%20The%20prop
osed%20approach%20computes%20the%20shape%20measures%20and%20contextual%20properties%20of%20individual%20road%20segments%20for%20features%20characterizing%20the%20associated%20nodes%20in%20the%20graph.%20Next%2C%20a%20semi-supervised%20approach%20uses%20these%20features%20and%20limited%20labeled%20interchanges%20to%20train%20a%20graph%20convolutional%20network%20that%20classifies%20these%20road%20segments%20into%20an%20interchange%20and%20non-interchange%20segments.%20Finally%2C%20an%20adaptive%20clustering%20approach%20groups%20the%20detected%20interchange%20segments%20into%20interchanges.%20Our%20experiment%20with%20the%20road%20networks%20of%20Beijing%20and%20Wuhan%20achieved%20a%20classification%20accuracy%20%26gt%3B95%25%20at%20a%20label%20rate%20of%2010%25.%20Moreover%2C%20the%20interchange%20detection%20precision%20and%20recall%20were%2079.6%20and%2075.7%25%20on%20the%20Beijing%20dataset%20and%2080.6%20and%2074.8%25%20on%20the%20Wuhan%20dataset%2C%20respectively%2C%20which%20were%2018.3%5Cu201336.1%20and%2017.4%5Cu201319.4%25%20higher%20than%20those%20of%20the%20existing%20approaches%20based%20on%20characteristic%20node%20clustering.%22%2C%22date%22%3A%222022-06-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2021.2024195%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2021.2024195%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A54%3A24Z%22%7D%7D%2C%7B%22key%22%3A%22FQB6WQ7T%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chen%20et%20al.%22%2C%22parsedDate%22%3A%222021-11-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BChen%2C%20W.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-Ite
mURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0198971521001137%26%23039%3B%26gt%3BClassification%20of%20urban%20morphology%20with%20deep%20learning%3A%20Application%20on%20urban%20vitality%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Classification%20of%20urban%20morphology%20with%20deep%20learning%3A%20Application%20on%20urban%20vitality%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wangyang%22%2C%22lastName%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Abraham%20Noah%22%2C%22lastName%22%3A%22Wu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Filip%22%2C%22lastName%22%3A%22Biljecki%22%7D%5D%2C%22abstractNote%22%3A%22There%20is%20a%20prevailing%20trend%20to%20study%20urban%20morphology%20quantitatively%20thanks%20to%20the%20growing%20accessibility%20to%20various%20forms%20of%20spatial%20big%20data%2C%20increasing%20computing%20power%2C%20and%20use%20cases%20benefiting%20from%20such%20information.%20The%20methods%20developed%20up%20to%20now%20measure%20urban%20morphology%20with%20numerical%20indices%20describing%20density%2C%20proportion%2C%20and%20mixture%2C%20but%20they%20do%20not%20directly%20represent%20morphological%20features%20from%20the%20human%26%23039%3Bs%20visual%20and%20intuitive%20perspective.%20We%20take%20the%20first%20step%20to%20bridge%20the%20gap%20by%20proposing%20a%20deep%20learning-based%20technique%20to%20automatically%20classify%20road%20networks%20into%20four%20classes%20on%20a%20visual%20basis.%20The%20method%20is%20implemented%20by%20generating%20an%20image%20of%20the%20street%20network%20%28Colored%20Road%20Hierarchy%20Diagram%29%2C%20which%20we%20introduce%20in%20this%20paper%2C%20and%20classifying%20it%20using%20a%20de
ep%20convolutional%20neural%20network%20%28ResNet-34%29.%20The%20model%20achieves%20an%20overall%20classification%20accuracy%20of%200.875.%20Nine%20cities%20around%20the%20world%20are%20selected%20as%20the%20study%20areas%20with%20their%20road%20networks%20acquired%20from%20OpenStreetMap.%20Latent%20subgroups%20among%20the%20cities%20are%20uncovered%20through%20clustering%20on%20the%20percentage%20of%20each%20road%20network%20category.%20In%20the%20subsequent%20part%20of%20the%20paper%2C%20we%20focus%20on%20the%20usability%20of%20such%20classification%3A%20we%20apply%20our%20method%20in%20a%20case%20study%20of%20urban%20vitality%20prediction.%20An%20advanced%20tree-based%20regression%20model%20%28LightGBM%29%20is%20for%20the%20first%20time%20designated%20to%20establish%20the%20relationship%20between%20morphological%20indices%20and%20vitality%20indicators.%20The%20effect%20of%20road%20network%20classification%20is%20found%20to%20be%20small%20but%20positively%20associated%20with%20urban%20vitality.%20This%20work%20expands%20the%20toolkit%20of%20quantitative%20urban%20morphology%20study%20with%20new%20techniques%2C%20supporting%20further%20studies%20in%20the%20future.%22%2C%22date%22%3A%222021-11-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.compenvurbsys.2021.101706%22%2C%22ISSN%22%3A%220198-9715%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0198971521001137%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-10-17T18%3A26%3A49Z%22%7D%7D%2C%7B%22key%22%3A%227AWW6QA5%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Djouvas%20et%20al.%22%2C%22parsedDate%22%3A%222021-11%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3B
Djouvas%2C%20C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9610820%26%23039%3B%26gt%3BAutomating%20road%20junction%20identification%20using%20Crowdsourcing%20and%20Machine%20Learning%20on%20GPS%20transformed%20data%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Automating%20road%20junction%20identification%20using%20Crowdsourcing%20and%20Machine%20Learning%20on%20GPS%20transformed%20data%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Constantinos%22%2C%22lastName%22%3A%22Djouvas%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ioannis%22%2C%22lastName%22%3A%22Despotis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Christos%22%2C%22lastName%22%3A%22Christodoulou%22%7D%5D%2C%22abstractNote%22%3A%22Identifying%20road%20junctions%20is%20of%20great%20importance%20for%20a%20number%20of%20applications%20that%20utilize%20electronic%20maps%2C%20like%20navigation%20systems.%20State%20of%20the%20art%20research%20on%20this%20area%20utilizes%20aerial%20images%20%28usually%20captured%20by%20satellites%29%2C%20on%20which%20different%20image%20processing%20techniques%20are%20applied%20for%20automatically%20identifying%20road%20junctions.%20In%20this%20work%2C%20we%20propose%20a%20radical%20new%20approach%20to%20solve%20this%20problem.%20Instead%20of%20images%2C%20we%20propose%20an%20approach%20that%20relies%20on%20transformed%20Global%20Positioning%20System%20%28GPS%29%20data%20collected%20and%20analyzed%20using%20big%20data%20techniques.%20In%20particular%2C%20we%20apply%20machine%20learning%20on%20Crowdsource%20collected%20and%20annotated%20GPS%20data%20for%20automatically%20identifying%20junctions.%20Results%20suggest%20that%20the%20proposed%20te
chnique%20is%20extremely%20effective.%20Furthermore%2C%20it%20is%20shown%20that%20it%20can%20be%20effective%20for%20solving%20the%20limitations%20that%20current%20approaches%20have.%22%2C%22date%22%3A%222021-11%22%2C%22proceedingsTitle%22%3A%222021%2016th%20International%20Workshop%20on%20Semantic%20and%20Social%20Media%20Adaptation%20%26%20Personalization%20%28SMAP%29%22%2C%22conferenceName%22%3A%222021%2016th%20International%20Workshop%20on%20Semantic%20and%20Social%20Media%20Adaptation%20%26%20Personalization%20%28SMAP%29%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FSMAP53521.2021.9610820%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9610820%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A18%3A34Z%22%7D%7D%2C%7B%22key%22%3A%22LZGC6FUF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kuo%20and%20Tsai%22%2C%22parsedDate%22%3A%222021-06%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BKuo%2C%20C.-L.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F6%5C%2F377%26%23039%3B%26gt%3BRoad%20Characteristics%20Detection%20Based%20on%20Joint%20Convolutional%20Neural%20Networks%20with%20Adaptive%20Squares%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Road%20Characteristics%20Detection%20Based%20on%20Joint%20Convolutional%20Neural%20Networks%20with%20Adaptive%20Squares%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chiao-Ling
%22%2C%22lastName%22%3A%22Kuo%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ming-Hua%22%2C%22lastName%22%3A%22Tsai%22%7D%5D%2C%22abstractNote%22%3A%22The%20importance%20of%20road%20characteristics%20has%20been%20highlighted%2C%20as%20road%20characteristics%20are%20fundamental%20structures%20established%20to%20support%20many%20transportation-relevant%20services.%20However%2C%20there%20is%20still%20huge%20room%20for%20improvement%20in%20terms%20of%20types%20and%20performance%20of%20road%20characteristics%20detection.%20With%20the%20advantage%20of%20geographically%20tiled%20maps%20with%20high%20update%20rates%2C%20remarkable%20accessibility%2C%20and%20increasing%20availability%2C%20this%20paper%20proposes%20a%20novel%20simple%20deep-learning-based%20approach%2C%20namely%20joint%20convolutional%20neural%20networks%20%28CNNs%29%20adopting%20adaptive%20squares%20with%20combination%20rules%20to%20detect%20road%20characteristics%20from%20roadmap%20tiles.%20The%20proposed%20joint%20CNNs%20are%20responsible%20for%20the%20foreground%20and%20background%20image%20classification%20and%20various%20types%20of%20road%20characteristics%20classification%20from%20previous%20foreground%20images%2C%20raising%20detection%20accuracy.%20The%20adaptive%20squares%20with%20combination%20rules%20help%20efficiently%20focus%20road%20characteristics%2C%20augmenting%20the%20ability%20to%20detect%20them%20and%20provide%20optimal%20detection%20results.%20Five%20types%20of%20road%20characteristics%5Cu2014crossroads%2C%20T-junctions%2C%20Y-junctions%2C%20corners%2C%20and%20curves%5Cu2014are%20exploited%2C%20and%20experimental%20results%20demonstrate%20successful%20outcomes%20with%20outstanding%20performance%20in%20reality.%20The%20information%20of%20exploited%20road%20characteristics%20with%20location%20and%20type%20is%2C%20thus%2C%20converted%20from%20human-readable%20to%20machine-readable%2C%20the%20results%20will%20benefit%20many%20applications%20like%20feature%20point%20reminde
rs%2C%20road%20condition%20reports%2C%20or%20alert%20detection%20for%20users%2C%20drivers%2C%20and%20even%20autonomous%20vehicles.%20We%20believe%20this%20approach%20will%20also%20enable%20a%20new%20path%20for%20object%20detection%20and%20geospatial%20information%20extraction%20from%20valuable%20map%20tiles.%22%2C%22date%22%3A%222021%5C%2F6%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi10060377%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F6%5C%2F377%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A04%3A45Z%22%7D%7D%2C%7B%22key%22%3A%22PD5PUNDZ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Touya%20and%20Lokhat%22%2C%22parsedDate%22%3A%222020-04-13%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BTouya%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3382080%26%23039%3B%26gt%3BDeep%20Learning%20for%20Enrichment%20of%20Vector%20Spatial%20Databases%3A%20Application%20to%20Highway%20Interchange%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Deep%20Learning%20for%20Enrichment%20of%20Vector%20Spatial%20Databases%3A%20Application%20to%20Highway%20Interchange%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guillaume%22%2C%22lastName%22%3A%22Touya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Imran%22%2C%22lastName%22%3A%22Lokhat%22%7D%5D%2C%22abstractNote%22%3A%22Spatial%20analysis%20and%20pa
ttern%20recognition%20with%20vector%20spatial%20data%20is%20particularly%20useful%20to%20enrich%20raw%20data.%20In%20road%20networks%2C%20for%20instance%2C%20there%20are%20many%20patterns%20and%20structures%20that%20are%20implicit%20with%20only%20road%20line%20features%2C%20among%20which%20highway%20interchange%20appeared%20very%20complex%20to%20recognize%20with%20vector-based%20techniques.%20The%20goal%20is%20to%20find%20the%20roads%20that%20belong%20to%20an%20interchange%2C%20such%20as%20the%20slip%20roads%20and%20the%20highway%20roads%20connected%20to%20the%20slip%20roads.%20To%20go%20further%20than%20state-of-the-art%20vector-based%20techniques%2C%20this%20article%20proposes%20to%20use%20raster-based%20deep%20learning%20techniques%20to%20recognize%20highway%20interchanges.%20The%20contribution%20of%20this%20work%20is%20to%20study%20how%20to%20optimally%20convert%20vector%20data%20into%20small%20images%20suitable%20for%20state-of-the-art%20deep%20learning%20models.%20Image%20classification%20with%20a%20convolutional%20neural%20network%20%28i.e.%2C%20is%20there%20an%20interchange%20in%20this%20image%20or%20not%3F%29%20and%20image%20segmentation%20with%20a%20u-net%20%28i.e.%2C%20find%20the%20pixels%20that%20cover%20the%20interchange%29%20are%20experimented%20and%20give%20better%20results%20than%20existing%20vector-based%20techniques%20in%20this%20specific%20use%20case%20%2899.5%25%20against%2074%25%29.%22%2C%22date%22%3A%22April%2013%2C%202020%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3382080%22%2C%22ISSN%22%3A%222374-0353%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3382080%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A44%3A10Z%22%7D%7D%2C%7B%22key%22%3A%22WULHSDZL%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%
Li, P. et al. MultiLineStringNet: a deep neural network for linear feature set recognition. 2024
Yang, M. et al. A Stacking Ensemble Learning Method to Classify the Patterns of Complex Road Junctions. 2022
Yang, M. et al. Detecting interchanges in road networks using a graph convolutional network approach. 2022
Chen, W. et al. Classification of urban morphology with deep learning: Application on urban vitality. 2021
Touya, G. et al. Deep Learning for Enrichment of Vector Spatial Databases: Application to Highway Interchange. 2020
Li, C. et al. A complex junction recognition method based on GoogLeNet model. 2020
Li, H. et al. Automatic Identification of Overpass Structures: A Method of Deep Learning. 2019
He, H. et al. Interchange Recognition Method Based on CNN. 2018
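A recurring preprocessing step in several of the junction-recognition papers above (e.g. the GoogLeNet, Faster-RCNN, and CNN interchange methods) is converting vector road data into raster images before feeding a CNN. A minimal sketch of that vector-to-raster step, using an illustrative helper name (`rasterize_segments`) and dense point sampling rather than any specific library's rasterizer:

```python
import numpy as np

def rasterize_segments(segments, size=64, bounds=(0.0, 0.0, 1.0, 1.0)):
    """Burn line segments ((x1, y1), (x2, y2)) into a binary size x size
    raster patch, a simplified stand-in for the vector-to-raster
    conversion used before CNN training on road-network samples."""
    minx, miny, maxx, maxy = bounds
    img = np.zeros((size, size), dtype=np.uint8)
    for (x1, y1), (x2, y2) in segments:
        # Sample densely along the segment so no pixel gap is skipped.
        for t in np.linspace(0.0, 1.0, 4 * size):
            x = x1 + t * (x2 - x1)
            y = y1 + t * (y2 - y1)
            col = int((x - minx) / (maxx - minx) * (size - 1))
            row = int((maxy - y) / (maxy - miny) * (size - 1))  # rows grow downward
            img[row, col] = 1
    return img

# A single horizontal road across the unit square becomes one pixel row.
patch = rasterize_segments([((0.0, 0.5), (1.0, 0.5))])
```

The resulting patches can then be labeled by junction type and fed to an off-the-shelf image classifier, which is the general pipeline these papers share before their model-specific contributions.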
Pattern Detection (Polygons)
Liu, T. et al. Recognition of building group patterns using GCN and knowledge graph. 2025
Cui, L. et al. Contrastive learning for one-shot building shape recognition using vector polygon transformers. 2025
Huang, Z. et al. Learning geometric invariant features for classification of vector polygons with graph message-passing neural network. 2025
Cui, L. et al. A Transformer-Based Approach for Efficient Geometric Feature Extraction from Vector Shape Data. 2025
Zhang, F. et al. Enhancing the Recognition of Collinear Building Patterns by Shape Cognition Based on Graph Neural Networks. 2024
Wang, X. et al. Recognition and Classification of Typical Building Shapes Based on YOLO Object Detection Models. 2024
Zou, X. et al. Classifying the Shapes of Buildings by Combining Distance Field Enhancement and a Convolution Neural Network. 2024
fied%22%3A%222025-10-17T18%3A25%3A56Z%22%7D%7D%2C%7B%22key%22%3A%2278ERXJPK%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wang%20et%20al.%22%2C%22parsedDate%22%3A%222024-04-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWang%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS019897152400005X%26%23039%3B%26gt%3BLearning%20visual%20features%20from%20figure-ground%20maps%20for%20urban%20morphology%20discovery%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Learning%20visual%20features%20from%20figure-ground%20maps%20for%20urban%20morphology%20discovery%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jing%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weiming%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Filip%22%2C%22lastName%22%3A%22Biljecki%22%7D%5D%2C%22abstractNote%22%3A%22Most%20studies%20of%20urban%20morphology%20rely%20on%20morphometrics%2C%20such%20as%20building%20area%20and%20street%20length.%20However%2C%20these%20methods%20often%20fall%20short%20in%20capturing%20visual%20patterns%20that%20carry%20abundant%20information%20about%20the%20configuration%20of%20urban%20elements%20and%20how%20they%20interact%20spatially.%20In%20this%20study%2C%20we%20introduce%20a%20novel%20method%20for%20learning%20morphology%20features%20based%20on%20figure-ground%20maps%2C%20which%20leve
rages%20recent%20developments%20in%20computer%20vision.%20Our%20method%20facilitates%20discovering%20and%20comparing%20urban%20form%20types%20in%20a%20fully%20unsupervised%20manner.%20Specifically%2C%20we%20examine%20building%20fabrics%20by%201%5Cu00a0km%20patches.%20A%20visual%20representation%20learning%20model%20%28SimCLR%29%20casts%20each%20patch%20into%20a%20latent%20embedding%20space%20where%20similar%20patches%20are%20clustered%20while%20dissimilar%20patches%20are%20dispelled%2C%20thus%20generating%20morphology%20representations%20that%20entail%20the%20layout%20of%20building%20groups.%20The%20learned%20morphology%20features%20are%20tested%20in%20urban%20form%20typology%20clustering%20and%20comparison%20tasks%20in%20four%20diverse%20cities%3A%20Singapore%2C%20San%20Francisco%2C%20Barcelona%2C%20and%20Amsterdam%2C%20with%20data%20sourced%20from%20OpenStreetMap.%20Clustering%20results%20show%20effective%20identification%20of%20typical%20urban%20morphology%20types%20corresponding%20to%20urban%20functions%20and%20historical%20developments.%20Further%20analyses%20based%20on%20the%20representations%20reveal%20inner-%20and%20cross-city%20morphological%20homogeneity%20relating%20to%20socio-economic%20drivers.%20We%20conclude%20that%20this%20method%20is%20a%20promising%20alternative%20for%20effectively%20describing%20urban%20patterns%20in%20morphology%20analysis.%22%2C%22date%22%3A%222024-04-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.compenvurbsys.2024.102076%22%2C%22ISSN%22%3A%220198-9715%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS019897152400005X%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-10-17T18%3A25%3A35Z%22%7D%7D%2C%7B%22key%22%3A%22ISWZDHHB%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-b
ib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20P.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2264756%26%23039%3B%26gt%3BMultiLineStringNet%3A%20a%20deep%20neural%20network%20for%20linear%20feature%20set%20recognition%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22MultiLineStringNet%3A%20a%20deep%20neural%20network%20for%20linear%20feature%20set%20recognition%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pengbo%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Haowen%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaomin%22%2C%22lastName%22%3A%22Lu%22%7D%5D%2C%22abstractNote%22%3A%22Pattern%20recognition%20of%20linear%20feature%20sets%2C%20such%20as%20river%20networks%2C%20road%20networks%2C%20and%20contour%20clusters%2C%20is%20essential%20in%20cartography%20and%20geographic%20information%20science.%20Previous%20studies%20have%20investigated%20many%20methods%20to%20identify%20the%20patterns%20of%20linear%20feature%20sets%3B%20the%20key%20to%20each%20of%20these%20studies%20is%20to%20generate%20a%20reasonable%20and%20computable%20representation%20for%20each%20set.%20However%2C%20most%20existing%20methods%20are%20only%20designed%20for%20a%20specific%20task%20or%20data%20type%20and%20cannot%20provide%20a%20general%20solution%20for%20formalizing%20linear%20feature%20sets%20owing%20to%20their%20complex%20geometric%20characteristics%2C%20spatial%20relations%20and%20distributions.%20In%20addition%2C%20some%20methods%20require%20hum
an%20involvement%20to%20specify%20characteristics%2C%20choose%20parameters%2C%20and%20determine%20the%20weights%20of%20different%20measures.%20To%20reduce%20human%20intervention%20and%20improve%20adaptability%20to%20various%20feature%20types%2C%20this%20paper%20proposes%20a%20novel%20deep%20learning%20architecture%20for%20learning%20the%20representations%20of%20linear%20feature%20sets.%20The%20presented%20model%20accepts%20vector%20data%20directly%20without%20extra%20data%20conversion%20and%20feature%20extraction.%20After%20generating%20local%2C%20neighborhood%2C%20and%20global%20representations%20of%20inputs%2C%20the%20representations%20are%20aggregated%20accordingly%20to%20perform%20pattern%20recognition%20tasks%2C%20including%20classification%20and%20segmentation.%20In%20the%20experiments%2C%20building%20groups%20classification%20and%20road%20interchanges%20segmentation%20achieved%20accuracies%20of%2098%25%20and%2089%25%2C%20respectively%2C%20indicating%20the%20model%5Cu2019s%20effectiveness%20and%20adaptability.%22%2C%22date%22%3A%222024-01-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2264756%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2023.2264756%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-18T12%3A56%3A21Z%22%7D%7D%2C%7B%22key%22%3A%226JVKEY4M%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fd
oi%5C%2F10.1111%5C%2Ftgis.13201%26%23039%3B%26gt%3BGraph%20isomorphism%20network%20with%20weighted%20multi%5Cu2010aggregators%20for%20building%20shape%20classification%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Graph%20isomorphism%20network%20with%20weighted%20multi%5Cu2010aggregators%20for%20building%20shape%20classification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ya%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiping%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yong%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yungang%22%2C%22lastName%22%3A%22Cao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shenghua%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22An%22%2C%22lastName%22%3A%22Luo%22%7D%5D%2C%22abstractNote%22%3A%22Building%20shape%20cognition%20is%20essential%20for%20tasks%2C%20such%20as%20map%20generalization%2C%20urban%20modeling%2C%20and%20building%20semantics%20and%20distribution%20pattern%20recognition.%20Traditional%20geometric%20and%20statistical%20methods%20rely%20on%20human%5Cu2010defined%20shape%20indicators%2C%20and%20spectral%5Cu2010based%20graph%20neural%20networks%20%28GNNs%29%20require%20Laplacian%20eigendecomposition%2C%20resulting%20in%20high%20algorithmic%20complexity.%20Therefore%2C%20we%20proposed%20a%20low%5Cu2010complexity%20and%20simple%5Cu2010to%5Cu2010use%20spatial%5Cu2010domain%20GNN%20for%20differentiating%20building%20shapes.%20To%20examine%20the%20influence%20of%20the%20building%20vertices%20on%20their%20shape%2C%20we%20treated%20each%20building%20as%20a%20graph%20and%20proposed%20a%20graph%20isomorphic%20network%20with%20weighted%20multi%5Cu2010aggregator
s%20%28GIN%5Cu2010WMA%29%20by%20analyzing%20the%20node%20connectivity%20of%20a%20building%20graph.%20The%20GIN%5Cu2010WMA%20utilizes%20a%20novel%20aggregator%20that%20combines%20the%20sum%20and%20max%20aggregators%2C%20enhancing%20its%20recognition%20and%20differentiation%20capabilities.%20This%20approach%20can%20effectively%20differentiate%20nodes%20that%20have%20identical%20features%20after%20aggregation%20by%20the%20sum%20aggregator.%20We%20extracted%20features%20considering%20both%20local%20node%20and%20global%20shape%20features%2C%20drawing%20inspiration%20from%20Gestalt%20cognitive%20psychology%20and%20GNN%26%23039%3Bs%20%5Cu201cnode%5Cu2013graph%5Cu201d%20differentiation%20strategy.%20In%20addition%2C%20we%20compared%20the%20performance%20of%20GIN%5Cu2010WMA%20with%20existing%20methods%2C%20studying%20the%20effect%20of%20various%20node%20features%20and%20their%20combinations%20on%20classification%20accuracy.%20The%20results%20demonstrated%20that%20GIN%5Cu2010WMA%20outperforms%20other%20methods%20in%20discriminating%20building%20shapes%2C%20demonstrating%20superior%20capabilities%20in%20shape%20classification%20and%20enabling%20end%5Cu2010to%5Cu2010end%20extraction%20and%20classification%20of%20building%20shapes.%22%2C%22date%22%3A%2209%5C%2F2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1111%5C%2Ftgis.13201%22%2C%22ISSN%22%3A%221361-1682%2C%201467-9671%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fonlinelibrary.wiley.com%5C%2Fdoi%5C%2F10.1111%5C%2Ftgis.13201%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A15%3A32Z%22%7D%7D%2C%7B%22key%22%3A%22UJ5LPXTC%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Liu%20et%20al.%22%2C%22parsedDate%22%3A%222024%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26qu
ot%3Bcsl-entry%26quot%3B%26gt%3BLiu%2C%20P.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS1569843224001481%26%23039%3B%26gt%3BSecond-order%20texton%20feature%20extraction%20and%20pattern%20recognition%20of%20building%20polygon%20cluster%20using%20CNN%20network%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Second-order%20texton%20feature%20extraction%20and%20pattern%20recognition%20of%20building%20polygon%20cluster%20using%20CNN%20network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pengcheng%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ziqin%22%2C%22lastName%22%3A%22Shao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tianyuan%22%2C%22lastName%22%3A%22Xiao%22%7D%5D%2C%22abstractNote%22%3A%22The%20cluster%20patterns%20of%20features%20in%20map%20space%20represent%20a%20comprehensive%20reflection%20of%20individual%20feature%20geometric%20attributes%20and%20their%20spatial%20adjacency%20relationships.%20These%20patterns%20also%20embody%20spatial%20cognition%20results%20under%20the%20Gestalt%20principle.%20Describing%20non-linear%20spatial%20cluster%20patterns%20as%20effective%20regular%20structures%20is%20one%20of%20the%20fundamental%20tasks%20in%20deep%20learning%20for%20recognizing%20feature%20cluster%20patterns.%20In%20this%20study%2C%20based%20on%20the%20concept%20of%20texture%20co-occurrence%20matrices%20from%20regular%20gray-scale%20images%2C%20we%20utilized%20Voronoi%20diagrams%20to%20construct%20the%20tessellation%20structure%20of%20building%20polygons.%20Built%20upon%20the%20foundation%20of%20first-order%20texton%20co-occurrence%20matrices%2C%20we%20established%20three-dimensional%20t
exton%20co-occurrence%20matrices%20for%20building%20polygons%2C%20considered%20five%20attributes%20of%20building%20size%2C%20shape%2C%20orientation%2C%20and%20density%2C%20and%20encompassed%2064%20different%20combinations%20of%20second-order%20neighboring%20directions.%20This%20matrix%20concretizes%20the%20latent%20Gestalt%20spatial%20characteristics%20of%20building%20polygon%20clusters%20into%20a%20three-dimensional%20sparse%20matrix.%20It%20is%20then%20used%20as%20an%20input%20vector%20to%20construct%20a%20deep%20convolutional%20neural%20network%20for%20recognizing%20building%20polygon%20cluster%20patterns.%20Through%20adjustments%20and%20optimizations%20of%20neural%20network%20structure%20and%20strategies%2C%20along%20with%20validation%20through%20practical%20case%20studies%20and%20comparisons%20with%20other%20models%2C%20we%20have%20demonstrated%20the%20effectiveness%20of%20the%20second-order%20texton%20co-occurrence%20matrix%20in%20describing%20the%20characteristics%20of%20building%20polygon%20clusters.%22%2C%22date%22%3A%2205%5C%2F2024%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.jag.2024.103794%22%2C%22ISSN%22%3A%2215698432%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flinkinghub.elsevier.com%5C%2Fretrieve%5C%2Fpii%5C%2FS1569843224001481%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-11-12T12%3A15%3A48Z%22%7D%7D%2C%7B%22key%22%3A%223WEXIAJG%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xu%20et%20al.%22%2C%22parsedDate%22%3A%222023-12-31%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXu%2C%20J.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1
080%5C%2F10106049.2023.2272662%26%23039%3B%26gt%3BRecognition%20of%20building%20shape%20in%20maps%20using%20deep%20graph%20filter%20neural%20network%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Recognition%20of%20building%20shape%20in%20maps%20using%20deep%20graph%20filter%20neural%20network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Junkui%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hao%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chun%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianzhong%22%2C%22lastName%22%3A%22Guo%22%7D%5D%2C%22abstractNote%22%3A%22Shape%20is%20one%20of%20the%20core%20features%20of%20the%20buildings%20which%20are%20the%20main%20elements%20of%20the%20map.%20The%20building%20shape%20recognition%20is%20widely%20used%20in%20many%20spatial%20applications.%20Due%20to%20the%20irregularity%20of%20the%20building%20contour%2C%20it%20is%20still%20challenging%20for%20building%20shape%20recognition.%20Inspired%20by%20graph%20signal%20processing%20theory%2C%20we%20propose%20a%20deep%20graph%20filter%20neural%20network%20%28DGFN%29%20for%20the%20shape%20recognition%20of%20buildings%20in%20maps.%20First%2C%20we%20regard%20shape%20recognition%20as%20a%20combination%20of%20subjective%20and%20objective%20graph%20signal%20filtering%20process.%20Second%2C%20we%20construct%20a%20shape%20features%20extraction%20framework%20from%20the%20perspective%20of%20shape%20details%2C%20shape%20structure%20and%20shape%20local%20information.%20Third%2C%20DGFN%20model%20can%20fulfil%20the%20tasks%20of%20shape%20classification%20and%20shape%20embedding%20of%20building%20at%20the%20same%20time.%20Finally%2C%20multi%20angle%20experiments%20verify%20our%20viewpoint%20of%20shap
e%20recognition%20mechanism%2C%20and%20the%20comparison%20with%20similar%20algorithms%20proves%20the%20high%20accuracy%20and%20availability%20of%20DGFN%20model.%22%2C%22date%22%3A%222023-12-31%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F10106049.2023.2272662%22%2C%22ISSN%22%3A%221010-6049%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2023.2272662%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T18%3A50%3A33Z%22%7D%7D%2C%7B%22key%22%3A%222T2ZQAL8%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Wei%20et%20al.%22%2C%22parsedDate%22%3A%222023-10-02%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BWei%2C%20Z.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F17538947.2023.2259868%26%23039%3B%26gt%3BEnhancing%20building%20pattern%20recognition%20through%20multi-scale%20data%20and%20knowledge%20graph%3A%20a%20case%20study%20of%20C-shaped%20patterns%26lt%3B%5C%2Fa%26gt%3B.%202023%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Enhancing%20building%20pattern%20recognition%20through%20multi-scale%20data%20and%20knowledge%20graph%3A%20a%20case%20study%20of%20C-shaped%20patterns%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhiwei%22%2C%22lastName%22%3A%22Wei%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wenjia%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yi%22%2C%22lastName%22%3A%22Xiao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2
C%22firstName%22%3A%22Mi%22%2C%22lastName%22%3A%22Shu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lu%22%2C%22lastName%22%3A%22Cheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yang%22%2C%22lastName%22%3A%22Wang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chunbo%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Building%20pattern%20recognition%20is%20important%20for%20understanding%20urban%20forms%2C%20automating%20map%20generalization%2C%20and%20visualizing%203D%20city%20models.%20However%2C%20current%20approaches%20based%20on%20object-independent%20methods%20have%20limitations%20in%20capturing%20all%20visually%20aware%20patterns%20due%20to%20the%20part-based%20nature%20of%20human%20vision.%20Moreover%2C%20these%20approaches%20also%20suffer%20from%20inefficiencies%20when%20applying%20proximity%20graph%20models.%20To%20address%20these%20limitations%2C%20we%20propose%20a%20framework%20that%20leverages%20multi-scale%20data%20and%20a%20knowledge%20graph%2C%20focusing%20on%20recognizing%20C-shaped%20building%20patterns.%20We%20first%20employ%20a%20specialized%20knowledge%20graph%20to%20represent%20the%20relationships%20between%20buildings%20within%20and%20across%20various%20scales.%20Subsequently%2C%20we%20convert%20the%20rules%20for%20C-shaped%20pattern%20recognition%20and%20enhancement%20into%20query%20conditions%2C%20where%20the%20enhancement%20refers%20to%20using%20patterns%20recognized%20at%20one%20scale%20to%20enhance%20pattern%20recognition%20at%20other%20scales.%20Finally%2C%20rule-based%20reasoning%20is%20applied%20within%20the%20constructed%20knowledge%20graph%20to%20recognize%20and%20enrich%20C-shaped%20building%20patterns.%20We%20verify%20the%20effectiveness%20of%20our%20method%20using%20multi-scale%20data%20with%20three%20levels%20of%20detail%20%28LODs%29%20collected%20from%20AMap%2C%20and%20our%20method%20achieves%20a%20higher%20recall%20rate%20of%2026.4%25%20for%20LOD1%2C%20
20.0%25%20for%20LOD2%2C%20and%209.1%25%20for%20LOD3%20compared%20to%20existing%20methods%20with%20similar%20precision%20rates.%20We%20also%20achieve%20recognition%20efficiency%20improvements%20of%200.91%2C%201.37%2C%20and%209.35%20times%2C%20respectively.%22%2C%22date%22%3A%222023-10-02%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F17538947.2023.2259868%22%2C%22ISSN%22%3A%221753-8947%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F17538947.2023.2259868%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-26T19%3A16%3A12Z%22%7D%7D%2C%7B%22key%22%3A%227SHCXP77%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Xu%20et%20al.%22%2C%22parsedDate%22%3A%222022-10-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BXu%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2022.2048834%26%23039%3B%26gt%3BApplication%20of%20a%20graph%20convolutional%20network%20with%20visual%20and%20semantic%20features%20to%20classify%20urban%20scenes%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Application%20of%20a%20graph%20convolutional%20network%20with%20visual%20and%20semantic%20features%20to%20classify%20urban%20scenes%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yongyang%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shuai%22%2C%22lastName%22%3A%22Jin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhanlong%22%2C%22lastN
ame%22%3A%22Chen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xuejing%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sheng%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhong%22%2C%22lastName%22%3A%22Xie%22%7D%5D%2C%22abstractNote%22%3A%22Urban%20scenes%20consist%20of%20visual%20and%20semantic%20features%20and%20exhibit%20spatial%20relationships%20among%20land-use%20types%20%28e.g.%20industrial%20areas%20are%20far%20away%20from%20the%20residential%20zones%29.%20This%20study%20applied%20a%20graph%20convolutional%20network%20with%20neighborhood%20information%20%28henceforth%2C%20named%20the%20neighbour%20supporting%20graph%20convolutional%20neural%20network%29%2C%20to%20learn%20spatial%20relationships%20for%20urban%20scene%20classification.%20Furthermore%2C%20a%20co-occurrence%20analysis%20with%20visual%20and%20semantic%20features%20proceeded%20to%20improve%20the%20accuracy%20of%20urban%20scene%20classification.%20We%20tested%20the%20proposed%20method%20with%20the%20fifth%20ring%20road%20of%20Beijing%20with%20an%20overall%20classification%20accuracy%20of%200.827%20and%20a%20Kappa%20coefficient%20of%200.769.%20In%20comparison%20with%20other%20methods%2C%20such%20as%20support%20vector%20machine%2C%20random%20forest%2C%20and%20general%20graph%20convolutional%20network%2C%20the%20case%20study%20showed%20that%20the%20proposed%20method%20improved%20about%2010%25%20in%20urban%20scene%20classification.%22%2C%22date%22%3A%222022-10-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2022.2048834%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2022.2048834%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A50%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22AVYX3SDY%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Yan%20et%20al.%22%2C%
22parsedDate%22%3A%222022-05-19%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BYan%2C%20X.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F10106049.2020.1856195%26%23039%3B%26gt%3BA%20graph%20deep%20learning%20approach%20for%20urban%20building%20grouping%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20graph%20deep%20learning%20approach%20for%20urban%20building%20grouping%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiongfeng%22%2C%22lastName%22%3A%22Yan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tinghua%22%2C%22lastName%22%3A%22Ai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Min%22%2C%22lastName%22%3A%22Yang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiaohua%22%2C%22lastName%22%3A%22Tong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Qian%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Identifying%20the%20spatial%20configurations%20of%20buildings%20and%20grouping%20them%20reasonably%20is%20an%20important%20task%20in%20cartography.%20This%20study%20developed%20a%20grouping%20approach%20using%20graph%20deep%20learning%20by%20integrating%20multiple%20cognitive%20features%20and%20manual%20cartographic%20experiences.%20Taking%20building%20center%20points%20as%20nodes%2C%20adjacent%20buildings%20were%20organized%20as%20a%20graph%20in%20which%20cognitive%20variables%20including%20size%2C%20orientation%2C%20and%20shape%20were%20defined%20for%20each%20node.%20
Liu, T. et al. Recognition of building group patterns using GCN and knowledge graph. 2025
Wang, J. et al. Learning visual features from figure-ground maps for urban morphology discovery. 2024
Li, P. et al. MultiLineStringNet: a deep neural network for linear feature set recognition. 2024
Zhang, Y. et al. Graph isomorphism network with weighted multi-aggregators for building shape classification. 2024
Xu, J. et al. Recognition of building shape in maps using deep graph filter neural network. 2023
Yan, X. et al. A graph deep learning approach for urban building grouping. 2022
Hu, Y. et al. Few-Shot Building Footprint Shape Classification with Relation Network. 2022
Ma, L. et al. A New Graph-Based Fractality Index to Characterize Complexity of Urban Form. 2022
Li, Y. et al. A Skeleton-Line-Based Graph Convolutional Neural Network for Areal Settlements' Shape Classification. 2022
Liu, C. et al. TriangleConv: A Deep Point Convolutional Network for Recognizing Building Shapes in Map Space. 2021
Yan, X. et al. Graph convolutional autoencoder model for the shape coding and cognition of buildings in maps. 2021
Zhao, R. et al. Recognition of building group patterns using graph convolutional network. 2020
Yan, X. et al. A graph convolutional neural network for classification of building patterns using spatial vector data. 2019
Lee, J. et al. Machine Learning Classification of Buildings for Map Generalization. 2017
Cleaning and Conflation
Void Filling
Yue, L. et al. Generative DEM Void Filling With Terrain Feature-Guided Transfer Learning Assisted by Remote Sensing Images. 2024
51Z%22%7D%7D%2C%7B%22key%22%3A%22LUS3LLLK%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Fang%20et%20al.%22%2C%22parsedDate%22%3A%222022-10-03%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BFang%2C%20Z.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2022.2072849%26%23039%3B%26gt%3BA%20topography-aware%20approach%20to%20the%20automatic%20generation%20of%20urban%20road%20networks%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20topography-aware%20approach%20to%20the%20automatic%20generation%20of%20urban%20road%20networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhou%22%2C%22lastName%22%3A%22Fang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiaxin%22%2C%22lastName%22%3A%22Qi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Lubin%22%2C%22lastName%22%3A%22Fan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jianqiang%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ying%22%2C%22lastName%22%3A%22Jin%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tianren%22%2C%22lastName%22%3A%22Yang%22%7D%5D%2C%22abstractNote%22%3A%22Existing%20deep-learning%20tools%20for%20road%20network%20generation%20have%20limited%20applications%20in%20flat%20urban%20areas%20due%20to%20their%20overreliance%20on%20the%20geometric%20and%20spatial%20configurations%20of%20street%20networks%20and%20inadequate%20cons
iderations%20of%20topographic%20information.%20This%20paper%20proposes%20a%20new%20method%20of%20street%20network%20generation%20based%20on%20a%20generative%20adversarial%20network%20by%20designing%20a%20pre-positioned%20geo-extractor%20module%20and%20a%20geo-merging%20bypath.%20The%20two%20improvements%20employ%20the%20complementary%20use%20of%20geometric%20configurations%20and%20topographic%20features%20to%20automate%20street%20network%20generation%20in%20both%20flat%20and%20hilly%20urban%20areas.%20Our%20experiments%20demonstrate%20that%20the%20improved%20model%20yields%20a%20more%20realistic%20prediction%20of%20street%20configurations%20than%20conventional%20image%20inpainting%20techniques.%20The%20model%5Cu2019s%20effectiveness%20is%20further%20enhanced%20when%20generating%20streets%20in%20hilly%20areas.%20Furthermore%2C%20the%20geo-extractor%20module%20provides%20insights%20from%20the%20computer%20vision%20perspective%20in%20recognizing%20when%20topographic%20information%20should%20be%20considered%20and%20which%20topographic%20information%20should%20receive%20more%20attention.%22%2C%22date%22%3A%222022-10-03%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2022.2072849%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2022.2072849%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A54%3A26Z%22%7D%7D%2C%7B%22key%22%3A%22WQBWT9PJ%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22St%5Cu00f6lzle%20et%20al.%22%2C%22parsedDate%22%3A%222022-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSt%5Cu00f6lzle%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_
blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9676411%26%23039%3B%26gt%3BReconstructing%20Occluded%20Elevation%20Information%20in%20Terrain%20Maps%20With%20Self-Supervised%20Learning%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Reconstructing%20Occluded%20Elevation%20Information%20in%20Terrain%20Maps%20With%20Self-Supervised%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maximilian%22%2C%22lastName%22%3A%22St%5Cu00f6lzle%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Takahiro%22%2C%22lastName%22%3A%22Miki%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Levin%22%2C%22lastName%22%3A%22Gerdes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Martin%22%2C%22lastName%22%3A%22Azkarate%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marco%22%2C%22lastName%22%3A%22Hutter%22%7D%5D%2C%22abstractNote%22%3A%22Accurate%20and%20complete%20terrain%20maps%20enhance%20the%20awareness%20of%20autonomous%20robots%20and%20enable%20safe%20and%20optimal%20path%20planning.%20Rocks%20and%20topography%20often%20create%20occlusions%20and%20lead%20to%20missing%20elevation%20information%20in%20the%20Digital%20Elevation%20Map%20%28DEM%29.%20Currently%2C%20these%20occluded%20areas%20are%20either%20fully%20avoided%20during%20motion%20planning%20or%20the%20missing%20values%20in%20the%20elevation%20map%20are%20filled-in%20using%20traditional%20interpolation%2C%20diffusion%20or%20patch-matching%20techniques.%20These%20methods%20cannot%20leverage%20the%20high-level%20terrain%20characteristics%20and%20the%20geometric%20constraints%20of%20line%20of%20sight%20we%20humans%20use%20intuitively%20to%20predict%20occluded%20areas.%20We%20introduce%20a%20self-supervised%20learning%20approach%20capable%20of%20training%20on%2
0real-world%20data%20without%20a%20need%20for%20ground-truth%20information%20to%20reconstruct%20the%20occluded%20areas%20in%20the%20DEMs.%20We%20accomplish%20this%20by%20adding%20artificial%20occlusion%20to%20the%20incomplete%20elevation%20maps%20constructed%20on%20a%20real%20robot%20by%20performing%20ray%20casting.%20We%20first%20evaluate%20a%20supervised%20learning%20approach%20on%20synthetic%20data%20for%20which%20we%20have%20the%20full%20ground-truth%20available%20and%20subsequently%20move%20to%20several%20real-world%20datasets.%20These%20real-world%20datasets%20were%20recorded%20during%20exploration%20of%20both%20structured%20and%20unstructured%20terrain%20with%20a%20legged%20robot%2C%20and%20additionally%20in%20a%20planetary%20scenario%20on%20Lunar%20analogue%20terrain.%20We%20state%20a%20significant%20improvement%20compared%20to%20the%20baseline%20methods%20both%20on%20synthetic%20terrain%20and%20for%20the%20real-world%20datasets.%20Our%20neural%20network%20is%20able%20to%20run%20in%20real-time%20on%20both%20CPU%20and%20GPU%20with%20suitable%20sampling%20rates%20for%20autonomous%20ground%20robots.%20We%20motivate%20the%20applicability%20of%20reconstructing%20occlusion%20in%20elevation%20maps%20with%20preliminary%20motion%20planning%20experiments.%22%2C%22date%22%3A%222022-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLRA.2022.3141662%22%2C%22ISSN%22%3A%222377-3766%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F9676411%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A21%3A28Z%22%7D%7D%2C%7B%22key%22%3A%227TRKR42T%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Li%20et%20al.%22%2C%22parsedDate%22%3A%222022-02-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20clas
s%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLi%2C%20S.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0034425721005381%26%23039%3B%26gt%3BIntegrating%20topographic%20knowledge%20into%20deep%20learning%20for%20the%20void-filling%20of%20digital%20elevation%20models%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Integrating%20topographic%20knowledge%20into%20deep%20learning%20for%20the%20void-filling%20of%20digital%20elevation%20models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sijin%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guanghui%22%2C%22lastName%22%3A%22Hu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xinghua%22%2C%22lastName%22%3A%22Cheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Liyang%22%2C%22lastName%22%3A%22Xiong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guoan%22%2C%22lastName%22%3A%22Tang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Josef%22%2C%22lastName%22%3A%22Strobl%22%7D%5D%2C%22abstractNote%22%3A%22Digital%20elevation%20models%20%28DEMs%29%20contain%20some%20of%20the%20most%20important%20data%20for%20providing%20terrain%20information%20and%20supporting%20environmental%20analyses.%20However%2C%20the%20applications%20of%20DEMs%20are%20significantly%20limited%20by%20data%20voids%2C%20which%20are%20commonly%20found%20in%20regions%20with%20rugged%20terrain.%20We%20propose%20a%20novel%20deep%20learning-based%20strategy%20called%20a%20topographic%20knowledge-constrained%20conditional%20generative%20adversarial%20network%20%28TKCGAN%29%20to%20fill%20data%20voids%20in%20DEMs.%20Shuttle%20Radar%20Topogra
phy%20Mission%20%28SRTM%29%20data%20with%20spatial%20resolutions%20of%203%20and%201%20arc-seconds%20are%20used%20in%20experiments%20to%20demonstrate%20the%20applicability%20of%20the%20TKCGAN.%20Qualitative%20topographic%20knowledge%20of%20valleys%20and%20ridges%20is%20transformed%20into%20new%20loss%20functions%20that%20can%20be%20applied%20in%20deep%20learning-based%20algorithms%20and%20constrain%20the%20training%20process.%20The%20results%20show%20that%20the%20TKCGAN%20outperforms%20other%20common%20methods%20in%20filling%20voids%20and%20improves%20the%20elevation%20and%20surface%20slope%20accuracy%20of%20the%20reconstruction%20results.%20The%20performance%20of%20the%20TKCGAN%20is%20stable%20in%20the%20test%20areas%20and%20reduces%20the%20error%20in%20the%20regions%20with%20medium%20and%20high%20surface%20slopes.%20Furthermore%2C%20the%20analysis%20of%20profiles%20indicates%20that%20the%20TKCGAN%20achieves%20better%20performance%20according%20to%20a%20visual%20inspection%20and%20quantitative%20comparison.%20In%20addition%2C%20the%20proposed%20strategy%20can%20be%20applied%20to%20DEMs%20with%20different%20resolutions.%20This%20work%20is%20an%20endeavour%20to%20transform%20topographic%20knowledge%20into%20computer-processable%20rules%20and%20benefits%20future%20research%20related%20to%20terrain%20reconstruction%20and%20modelling.%22%2C%22date%22%3A%222022-02-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1016%5C%2Fj.rse.2021.112818%22%2C%22ISSN%22%3A%220034-4257%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.sciencedirect.com%5C%2Fscience%5C%2Farticle%5C%2Fpii%5C%2FS0034425721005381%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A06%3A46Z%22%7D%7D%2C%7B%22key%22%3A%227E6R2IIA%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhou%20et%20al.%22%2C%22parsedDate%22%3A%222022-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bli
ne-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhou%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F14%5C%2F5%5C%2F1206%26%23039%3B%26gt%3BVoids%20Filling%20of%20DEM%20with%20Multiattention%20Generative%20Adversarial%20Network%20Model%26lt%3B%5C%2Fa%26gt%3B.%202022%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Voids%20Filling%20of%20DEM%20with%20Multiattention%20Generative%20Adversarial%20Network%20Model%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guoqing%22%2C%22lastName%22%3A%22Zhou%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Song%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Liang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jiasheng%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tao%22%2C%22lastName%22%3A%22Yue%22%7D%5D%2C%22abstractNote%22%3A%22The%20digital%20elevation%20model%20%28DEM%29%20acquired%20through%20photogrammetry%20or%20LiDAR%20usually%20exposes%20voids%20due%20to%20phenomena%20such%20as%20instrumentation%20artifact%2C%20ground%20occlusion%2C%20etc.%20For%20this%20reason%2C%20this%20paper%20proposes%20a%20multiattention%20generative%20adversarial%20network%20model%20to%20fill%20the%20voids.%20In%20this%20model%2C%20a%20multiscale%20feature%20fusion%20generation%20network%20is%20proposed%20to%20initially%20fill%20the%20voids%2C%20and%20then%20a%20multiattention%20filling%20network%20is%20proposed%20to%20recover%20the%20detailed%20features%20of%20the%20terrain%20surrounding%20the%20void%20area%2C%20and%20the
%20channel-spatial%20cropping%20attention%20mechanism%20module%20is%20proposed%20as%20an%20enhancement%20of%20the%20network.%20Spectral%20normalization%20is%20added%20to%20each%20convolution%20layer%20in%20the%20discriminator%20network.%20Finally%2C%20the%20training%20of%20the%20model%20by%20a%20combined%20loss%20function%2C%20including%20reconstruction%20loss%20and%20adversarial%20loss%2C%20is%20optimized.%20Three%20groups%20of%20experiments%20with%20four%20different%20types%20of%20terrains%2C%20hillsides%2C%20valleys%2C%20ridges%20and%20hills%2C%20are%20conducted%20for%20validation%20of%20the%20proposed%20model.%20The%20experimental%20results%20show%20that%20%281%29%20the%20structural%20similarity%20surrounding%20terrestrial%20voids%20in%20the%20three%20types%20of%20terrains%20%28i.e.%2C%20hillside%2C%20valley%2C%20and%20ridge%29%20can%20reach%2080%5Cu201390%25%2C%20which%20implies%20that%20the%20DEM%20accuracy%20can%20be%20improved%20by%20at%20least%2010%25%20relative%20to%20the%20traditional%20interpolation%20methods%20%28i.e.%2C%20Kriging%2C%20IDW%2C%20and%20Spline%29%2C%20and%20can%20reach%2057.4%25%2C%20while%20other%20deep%20learning%20models%20%28i.e.%2C%20CE%2C%20GL%20and%20CR%29%20only%20reach%2043.2%25%2C%2017.1%25%20and%2011.4%25%20in%20the%20hilly%20areas%2C%20respectively.%20Therefore%2C%20it%20can%20be%20concluded%20that%20the%20structural%20similarity%20surrounding%20the%20terrestrial%20voids%20filled%20using%20the%20model%20proposed%20in%20this%20paper%20can%20reach%2060%5Cu201390%25%20upon%20the%20types%20of%20terrain%2C%20such%20as%20hillside%2C%20valley%2C%20ridge%2C%20and%20hill.%22%2C%22date%22%3A%222022%5C%2F1%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Frs14051206%22%2C%22ISSN%22%3A%222072-4292%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F14%5C%2F5%5C%2F1206%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A56%3A29Z%22%7D%7D%2C%7B%22key%22%3A%223IQMWBJ9%22%2C%22library%22%3A
%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222020-12%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20C.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F9%5C%2F12%5C%2F734%26%23039%3B%26gt%3BDEM%20Void%20Filling%20Based%20on%20Context%20Attention%20Generation%20Model%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22DEM%20Void%20Filling%20Based%20on%20Context%20Attention%20Generation%20Model%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chunsen%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shu%22%2C%22lastName%22%3A%22Shi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yingwei%22%2C%22lastName%22%3A%22Ge%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hengheng%22%2C%22lastName%22%3A%22Liu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weihong%22%2C%22lastName%22%3A%22Cui%22%7D%5D%2C%22abstractNote%22%3A%22The%20digital%20elevation%20model%20%28DEM%29%20generates%20a%20digital%20simulation%20of%20ground%20terrain%20in%20a%20certain%20range%20with%20the%20usage%20of%203D%20point%20cloud%20data.%20It%20is%20an%20important%20source%20of%20spatial%20modeling%20information.%20Due%20to%20various%20reasons%2C%20however%2C%20the%20generated%20DEM%20has%20data%20holes.%20Based%20on%20the%20algorithm%20of%20deep%20learning%2C%20this%20paper%20aims%20to%20train%20a%20deep%20generatio
n%20model%20%28DGM%29%20to%20complete%20the%20DEM%20void%20filling%20task.%20A%20certain%20amount%20of%20DEM%20data%20and%20a%20randomly%20generated%20mask%20are%20taken%20as%20network%20inputs%2C%20along%20which%20the%20reconstruction%20loss%20and%20generative%20adversarial%20network%20%28GAN%29%20loss%20are%20used%20to%20assist%20network%20training%2C%20so%20as%20to%20perceive%20the%20overall%20known%20elevation%20information%2C%20in%20combination%20with%20the%20contextual%20attention%20layer%2C%20and%20generate%20data%20with%20reliability%20to%20fill%20the%20void%20areas.%20The%20experimental%20results%20have%20managed%20to%20show%20that%20this%20method%20has%20good%20feature%20expression%20and%20reconstruction%20accuracy%20in%20DEM%20void%20filling%2C%20which%20has%20been%20proven%20to%20be%20better%20than%20that%20illustrated%20by%20the%20traditional%20interpolation%20method.%22%2C%22date%22%3A%222020%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi9120734%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F9%5C%2F12%5C%2F734%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A55%3A25Z%22%7D%7D%2C%7B%22key%22%3A%22KRJNLZSC%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Dong%20et%20al.%22%2C%22parsedDate%22%3A%222020-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BDong%2C%20G.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8789526%26%23039%3B%26gt%3BFilling%20Voids%20in%20Elevation%20Models%20Using%20a%20Shadow-Constrained%20Convolutional%20Neural%20N
etwork%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Filling%20Voids%20in%20Elevation%20Models%20Using%20a%20Shadow-Constrained%20Convolutional%20Neural%20Network%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Guoshuai%22%2C%22lastName%22%3A%22Dong%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Weimin%22%2C%22lastName%22%3A%22Huang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22William%20A.%20P.%22%2C%22lastName%22%3A%22Smith%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Peng%22%2C%22lastName%22%3A%22Ren%22%7D%5D%2C%22abstractNote%22%3A%22We%20explore%20the%20use%20of%20convolutional%20neural%20networks%20%28CNNs%29%20for%20filling%20voids%20in%20digital%20elevation%20models%20%28DEM%29.%20We%20propose%20a%20baseline%20approach%20using%20a%20fully%20convolutional%20network%20to%20predict%20complete%20from%20incomplete%20DEMs%2C%20which%20is%20trained%20in%20a%20supervised%20fashion.%20We%20then%20extend%20this%20to%20a%20shadow-constrained%20CNN%20%28SCCNN%29%20by%20introducing%20additional%20loss%20functions%20that%20encourage%20the%20restored%20DEM%20to%20adhere%20to%20geometric%20constraints%20implied%20by%20cast%20shadows.%20At%20the%20training%20time%2C%20we%20use%20automatically%20extracted%20cast%20shadow%20maps%20and%20known%20sun%20directions%20to%20compute%20the%20shadow-based%20supervisory%20signal%20in%20addition%20to%20the%20direct%20DEM%20supervision.%20At%20the%20test%20time%2C%20our%20network%20directly%20predicts%20restored%20DEMs%20from%20an%20incomplete%20DEM.%20One%20key%20advantage%20of%20our%20SCCNN%20model%20is%20that%20it%20is%20characterized%20by%20both%20CNN%20data%20inference%20and%20geometric%20shadow%20cues.%20It%20thus%20avoids%20data%20restoration%20that%20may%20violate%20shadowing%20conditions.%20Both%20our%20baseline%20CNN
%20and%20SCCNN%20outperform%20the%20inverse%20distance%20weighting%20%28IDW%29-based%20interpolation%20method%2C%20with%20the%20shadow%20supervision%20enabling%20SCCNN%20to%20obtain%20the%20best%20performance.%22%2C%22date%22%3A%222020-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLGRS.2019.2926530%22%2C%22ISSN%22%3A%221558-0571%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8789526%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A22%3A00Z%22%7D%7D%2C%7B%22key%22%3A%22EAJDRC77%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gavriil%20et%20al.%22%2C%22parsedDate%22%3A%222019-10%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGavriil%2C%20K.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8669876%26%23039%3B%26gt%3BVoid%20Filling%20of%20Digital%20Elevation%20Models%20With%20Deep%20Generative%20Models%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Void%20Filling%20of%20Digital%20Elevation%20Models%20With%20Deep%20Generative%20Models%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Konstantinos%22%2C%22lastName%22%3A%22Gavriil%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Georg%22%2C%22lastName%22%3A%22Muntingh%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Oliver%20J.%20D.%22%2C%22lastName%22%3A%22Barrowclough%22%7D%5D%2C%22abstractNote%22%3A%22In%20recent%20years%2C%20advances%20in%20machine%20learn
ing%20algorithms%2C%20cheap%20computational%20resources%2C%20and%20the%20availability%20of%20big%20data%20have%20spurred%20the%20deep%20learning%20revolution%20in%20various%20application%20domains.%20In%20particular%2C%20supervised%20learning%20techniques%20in%20image%20analysis%20have%20led%20to%20a%20superhuman%20performance%20in%20various%20tasks%2C%20such%20as%20classification%2C%20localization%2C%20and%20segmentation%2C%20whereas%20unsupervised%20learning%20techniques%20based%20on%20increasingly%20advanced%20generative%20models%20have%20been%20applied%20to%20generate%20high-resolution%20synthetic%20images%20indistinguishable%20from%20real%20images.%20In%20this%20letter%2C%20we%20consider%20a%20state-of-the-art%20machine%20learning%20model%20for%20image%20inpainting%2C%20namely%2C%20a%20Wasserstein%20Generative%20Adversarial%20Network%20based%20on%20a%20fully%20convolutional%20architecture%20with%20a%20contextual%20attention%20mechanism.%20We%20show%20that%20this%20model%20can%20be%20successfully%20transferred%20to%20the%20setting%20of%20digital%20elevation%20models%20for%20the%20purpose%20of%20generating%20semantically%20plausible%20data%20for%20filling%20voids.%20Training%2C%20testing%2C%20and%20experimentation%20are%20done%20on%20GeoTIFF%20data%20from%20various%20regions%20in%20Norway%2C%20made%20openly%20available%20by%20the%20Norwegian%20Mapping%20Authority.%22%2C%22date%22%3A%222019-10%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FLGRS.2019.2902222%22%2C%22ISSN%22%3A%221558-0571%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fieeexplore.ieee.org%5C%2Fdocument%5C%2F8669876%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-05-02T20%3A22%3A35Z%22%7D%7D%2C%7B%22key%22%3A%22H6YFLXE6%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Qiu%20et%20al.%22%2C%22parsedDate%22%3A%222019-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Blin
e-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BQiu%2C%20Z.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2072-4292%5C%2F11%5C%2F23%5C%2F2829%26%23039%3B%26gt%3BVoid%20Filling%20of%20Digital%20Elevation%20Models%20with%20a%20Terrain%20Texture%20Learning%20Model%20Based%20on%20Generative%20Adversarial%20Networks%26lt%3B%5C%2Fa%26gt%3B.%202019%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Void%20Filling%20of%20Digital%20Elevation%20Models%20with%20a%20Terrain%20Texture%20Learning%20Model%20Based%20on%20Generative%20Adversarial%20Networks%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhonghang%22%2C%22lastName%22%3A%22Qiu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Linwei%22%2C%22lastName%22%3A%22Yue%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Xiuguo%22%2C%22lastName%22%3A%22Liu%22%7D%5D%2C%22abstractNote%22%3A%22Digital%20elevation%20models%20%28DEMs%29%20are%20an%20important%20information%20source%20for%20spatial%20modeling.%20However%2C%20data%20voids%2C%20which%20commonly%20exist%20in%20regions%20with%20rugged%20topography%2C%20result%20in%20incomplete%20DEM%20products%2C%20and%20thus%20significantly%20degrade%20DEM%20data%20quality.%20Interpolation%20methods%20are%20commonly%20used%20to%20fill%20voids%20of%20small%20sizes.%20For%20large-scale%20voids%2C%20multi-source%20fusion%20is%20an%20effective%20solution.%20Nevertheless%2C%20high-quality%20auxiliary%20source%20information%20is%20always%20difficult%20to%20retrieve%20in%20rugged%20mountainous%20areas.%20Thus%2C%20the%20void%20filling%20task%20is%20still%20a%20challenge.%20In%20this%20paper%2C%20we%20proposed%20a%20met
Fang, Z. et al. A topography-aware approach to the automatic generation of urban road networks. 2022
Stölzle, M. et al. Reconstructing Occluded Elevation Information in Terrain Maps With Self-Supervised Learning. 2022
Zhou, G. et al. Voids Filling of DEM with Multiattention Generative Adversarial Network Model. 2022
Zhang, C. et al. DEM Void Filling Based on Context Attention Generation Model. 2020
Gavriil, K. et al. Void Filling of Digital Elevation Models With Deep Generative Models. 2019
Processing Workflows
Li, Z. et al. Autonomous GIS: the next-generation AI-powered GIS. 2023
Record Linkage (Addresses)
Li, F. et al. Multi-task deep learning model based on hierarchical relations of address elements for semantic address matching. 2022
Xu, L. et al. Deep Transfer Learning Model for Semantic Address Matching. 2022
Cheng, R. et al. A location conversion method for roads through deep learning-based semantic matching and simplified qualitative direction knowledge representation. 2021
Park, S. et al. BertLoc: duplicate location record detection in a large-scale location dataset. 2021
Chen, J. et al. Deep Contrast Learning Approach for Address Semantic Matching. 2021
Lin, Y. et al. A deep learning architecture for semantic address matching. 2020
Record Linkage (Toponyms)
Hu, X. et al. Toponym resolution leveraging lightweight and open-source large language models and geo-knowledge. 2024
Fize, J. et al. Deep Learning for Toponym Resolution: Geocoding Based on Pairs of Toponyms. 2021
0name%29%20or%20reference%20database%20completeness.%20In%20this%20work%2C%20we%20propose%20a%20geocoding%20approach%20based%20on%20modeling%20pairs%20of%20toponyms%2C%20which%20returns%20latitude-longitude%20coordinates.%20One%20of%20the%20input%20toponyms%20will%20be%20geocoded%2C%20and%20the%20second%20one%20is%20used%20as%20context%20to%20reduce%20ambiguities.%20The%20proposed%20approach%20is%20based%20on%20a%20deep%20neural%20network%20that%20uses%20Long%20Short-Term%20Memory%20%28LSTM%29%20units%20to%20produce%20representations%20from%20sequences%20of%20character%20n-grams.%20To%20train%20our%20model%2C%20we%20use%20toponym%20co-occurrences%20collected%20from%20different%20contexts%2C%20namely%20textual%20%28i.e.%2C%20co-occurrences%20of%20toponyms%20in%20Wikipedia%20articles%29%20and%20geographical%20%28i.e.%2C%20inclusion%20and%20proximity%20of%20places%20based%20on%20Geonames%20data%29.%20Experiments%20based%20on%20multiple%20geographical%20areas%20of%20interest%5Cu2014France%2C%20United%20States%2C%20Great-Britain%2C%20Nigeria%2C%20Argentina%20and%20Japan%5Cu2014were%20conducted.%20Results%20show%20that%20models%20trained%20with%20co-occurrence%20data%20obtained%20a%20higher%20geocoding%20accuracy%2C%20and%20that%20proximity%20relations%20in%20combination%20with%20co-occurrences%20can%20help%20to%20obtain%20a%20slightly%20higher%20accuracy%20in%20geographical%20areas%20with%20fewer%20places%20in%20the%20data%20sources.%22%2C%22date%22%3A%222021%5C%2F12%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.3390%5C%2Fijgi10120818%22%2C%22ISSN%22%3A%222220-9964%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.mdpi.com%5C%2F2220-9964%5C%2F10%5C%2F12%5C%2F818%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T19%3A56%3A28Z%22%7D%7D%2C%7B%22key%22%3A%22LL8VE6JY%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Alexis%20et%20al.%22%2C%22parsedDate%22%3A%222020-06-14%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%
26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BAlexis%2C%20K.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3403896.3403970%26%23039%3B%26gt%3BBoosting%20toponym%20interlinking%20by%20paying%20attention%20to%20both%20machine%20and%20deep%20learning%26lt%3B%5C%2Fa%26gt%3B.%202020%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Boosting%20toponym%20interlinking%20by%20paying%20attention%20to%20both%20machine%20and%20deep%20learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Konstantinos%22%2C%22lastName%22%3A%22Alexis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vassilis%22%2C%22lastName%22%3A%22Kaffes%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Giorgos%22%2C%22lastName%22%3A%22Giannopoulos%22%7D%5D%2C%22abstractNote%22%3A%22Toponym%20interlinking%20is%20the%20problem%20of%20identifying%20same%20spatio-textual%20entities%20within%20two%20or%20more%20different%20data%20sources%2C%20based%20exclusively%20on%20their%20names.%20It%20comprises%20a%20significant%20task%20in%20geospatial%20data%20management%20and%20integration%20with%20application%20in%20fields%20such%20as%20geomarketing%2C%20cadastration%2C%20navigation%2C%20etc.%20Previous%20works%20have%20assessed%20the%20effectiveness%20of%20unsupervised%20string%20similarity%20functions%2C%20while%20more%20recent%20ones%20have%20deployed%20similarity-based%20Machine%20Learning%20techniques%20and%20language%20model-based%20Deep%20Learning%20techniques%2C%20achieving%20significantly%20higher%20interlinking%20accuracy.%20In%20this%2
0paper%2C%20we%20demonstrate%20the%20suitability%20of%20Attentionbased%20neural%20networks%20on%20the%20problem%2C%20as%20well%20as%20the%20fact%20that%20all%20different%20approaches%20provide%20merit%20to%20the%20problem%2C%20proposing%20a%20hybrid%20scheme%20that%20achieves%20the%20highest%20accuracy%20reported%20on%20toponym%20interlinking%20on%20the%20widely%20used%20Geonames%20dataset.%22%2C%22date%22%3A%222020-06-14%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%20Sixth%20International%20ACM%20SIGMOD%20Workshop%20on%20Managing%20and%20Mining%20Enriched%20Geo-Spatial%20Data%22%2C%22conferenceName%22%3A%22SIGMOD%5C%2FPODS%20%2720%3A%20International%20Conference%20on%20Management%20of%20Data%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1145%5C%2F3403896.3403970%22%2C%22ISBN%22%3A%22978-1-4503-8035-5%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3403896.3403970%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A58%3A02Z%22%7D%7D%2C%7B%22key%22%3A%22QLPTJUNI%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Santos%20et%20al.%22%2C%22parsedDate%22%3A%222018-02-01%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSantos%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2017.1390119%26%23039%3B%26gt%3BToponym%20matching%20through%20deep%20neural%20networks%26lt%3B%5C%2Fa%26gt%3B.%202018%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Toponym%20matching%20through%20deep%20neural%20networks%22%2C%22creators%22%3
A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rui%22%2C%22lastName%22%3A%22Santos%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Patricia%22%2C%22lastName%22%3A%22Murrieta-Flores%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22P%5Cu00e1vel%22%2C%22lastName%22%3A%22Calado%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bruno%22%2C%22lastName%22%3A%22Martins%22%7D%5D%2C%22abstractNote%22%3A%22Toponym%20matching%2C%20i.e.%20pairing%20strings%20that%20represent%20the%20same%20real-world%20location%2C%20is%20a%20fundamental%20problemfor%20several%20practical%20applications.%20The%20current%20state-of-the-art%20relies%20on%20string%20similarity%20metrics%2C%20either%20specifically%20developed%20for%20matching%20place%20names%20or%20integrated%20within%20methods%20that%20combine%20multiple%20metrics.%20However%2C%20these%20methods%20all%20rely%20on%20common%20sub-strings%20in%20order%20to%20establish%20similarity%2C%20and%20they%20do%20not%20effectively%20capture%20the%20character%20replacements%20involved%20in%20toponym%20changes%20due%20to%20transliterations%20or%20to%20changes%20in%20language%20and%20culture%20over%20time.%20In%20this%20article%2C%20we%20present%20a%20novel%20matching%20approach%2C%20leveraging%20a%20deep%20neural%20network%20to%20classify%20pairs%20of%20toponyms%20as%20either%20matching%20or%20nonmatching.%20The%20proposed%20network%20architecture%20uses%20recurrent%20nodes%20to%20build%20representations%20from%20the%20sequences%20of%20bytes%20that%20correspond%20to%20the%20strings%20that%20are%20to%20be%20matched.%20These%20representations%20are%20then%20combined%20and%20passed%20to%20feed-forward%20nodes%2C%20finally%20leading%20to%20a%20classification%20decision.%20We%20present%20the%20results%20of%20a%20wide-ranging%20evaluation%20on%20the%20performance%20of%20the%20proposed%20method%2C%20using%20a%20large%20dataset%20collected%20from%20the%20GeoNames%20gazetteer.%20These%20
results%20show%20that%20the%20proposed%20method%20can%20significantly%20outperform%20individual%20similarity%20metrics%20from%20previous%20studies%2C%20as%20well%20as%20previous%20methods%20based%20on%20supervised%20machine%20learning%20for%20combining%20multiple%20metrics.%22%2C%22date%22%3A%222018-02-01%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F13658816.2017.1390119%22%2C%22ISSN%22%3A%221365-8816%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F13658816.2017.1390119%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-03-16T20%3A40%3A32Z%22%7D%7D%5D%7D
Fize, J. et al. Deep Learning for Toponym Resolution: Geocoding Based on Pairs of Toponyms. 2021
Alexis, K. et al. Boosting toponym interlinking by paying attention to both machine and deep learning. 2020
Santos, R. et al. Toponym matching through deep neural networks. 2018
Data Structures
Zhang, Z. et al. An AI-based Spatial Knowledge Graph for Enhancing Spatial Data and Knowledge Search and Discovery. 2021
Wayfinding and Routing
Hei, Q. et al. Detecting dynamic visual attention in augmented reality aided navigation environment based on a multi-feature integration fully convolutional network. 2023
Liu, Z. et al. DeepGPS: Deep Learning Enhanced GPS Positioning in Urban Canyons. 2022
Recommender Systems
Pramanik, S. et al. Deep Learning Driven Venue Recommender for Event-Based Social Networks. 2020
Risk Prevention
Kim, J.-M. et al. Strategic framework for natural disaster risk mitigation using deep learning and cost-benefit analysis. 2022
Kang, B. et al. A deep-learning-based emergency alert system. 2016
Modeling and Simulations (Physical Geography)
Estacio, I., Lim, C., Onitsuka, K. and Hoshino, S. Predicting the future through observations of the past: Concretizing the role of Geosimulation for holistic geospatial knowledge. 2024. https://doi.org/10.1016/j.geomat.2024.100012
Roy, A., Fablet, R. and Bertrand, S.L. Using generative adversarial networks (GAN) to simulate central-place foraging trajectories. 2022. https://doi.org/10.1111/2041-210X.13853
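Several entries in this section concern geosimulation, i.e. stepping a spatial model forward in time to project environmental change. As a minimal illustrative sketch (not taken from any cited paper; all parameter names and values are invented for the example), a probabilistic cellular automaton for urban growth can be written in a few lines of NumPy:

```python
import numpy as np

def urban_growth_step(grid, p_base=0.02, p_neighbor=0.15, rng=None):
    """One step of a toy urban-growth cellular automaton.

    grid: 2-D array of 0 (non-urban) / 1 (urban) cells.
    A non-urban cell becomes urban with probability
    p_base + p_neighbor * (urban neighbors / 8); urban cells stay urban.
    Parameters are illustrative, not calibrated.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = grid.shape
    # Count urban neighbors in the 8-cell Moore neighborhood via shifted slices.
    padded = np.pad(grid, 1)
    neighbors = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    )
    p_convert = p_base + p_neighbor * neighbors / 8.0
    converts = (rng.random(grid.shape) < p_convert) & (grid == 0)
    return np.where(converts, 1, grid)

# Seed a single urban cell and simulate ten steps.
rng = np.random.default_rng(42)
grid = np.zeros((20, 20), dtype=int)
grid[10, 10] = 1
for _ in range(10):
    grid = urban_growth_step(grid, rng=rng)
```

Real geosimulation models add transition rules driven by observed covariates (slope, road distance, land value); the deep-learning papers listed here can be read as learning such transition rules from time-series imagery instead of hand-specifying them.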
Modeling and Simulations (Human Geography)
Alastal, A.I. and Shaqfa, A.H. GeoAI Technologies and Their Application Areas in Urban Planning and Development: Concepts, Opportunities and Challenges in Smart City (Kuwait, Study Case). 2022. https://doi.org/10.4236/jdaip.2022.102007
Boulila, W., Ghandorh, H., Khan, M.A., Ahmed, F. and Ahmad, J. A novel CNN-LSTM-based approach to predict urban expansion. 2021. https://doi.org/10.1016/j.ecoinf.2021.101325
Wu, Y., Teufel, B., Sushama, L., Belair, S. and Sun, L. Deep Learning-Based Super-Resolution Climate Simulator-Emulator Framework for Urban Heat Studies. 2021. https://doi.org/10.1029/2021GL094737
Ethics
Kausika, B.B. and van Altena, V. GeoAI in Topographic Mapping: Navigating the Future of Opportunities and Risks. 2025. https://doi.org/10.3390/ijgi14080313
Edler, D., Drews, J., Berr, K. and Kühne, O. Fallibilism and Generative AI in Cartography: Some Fundamental Theoretical Thoughts. 2025. https://doi.org/10.1007/s42489-025-00186-0
Lin, Y. and Zhao, B. Posthuman Cartography? Rethinking Artificial Intelligence, Cartographic Practices, and Reflexivity. 2025. https://doi.org/10.1080/24694452.2024.2435920
Kühne, O. and Edler, D. Reconstructing the Map: A Neopragmatist Perspective on Cartography in the Context of Artificial Intelligence (AI). 2025. https://doi.org/10.1007/s42489-024-00184-8
an%20outline%20of%20Traditional%20and%20Critical%20Cartography%2C%20a%20neopragmatist%20perspective%20is%20developed%20that%20promotes%20inclusivity%20and%20problem-solving%20orientation.%20This%20approach%20draws%20on%20the%20analytical%20framework%20of%20Karl%20Popper%5Cu2019s%20Three%20Worlds%20Theory%2C%20specifically%20the%20Theory%20of%20Three%20Spaces.%20Neopragmatism%20emphasizes%20the%20production%20of%20useful%20knowledge%20over%20absolute%20truth%20and%20acknowledges%20the%20contingency%20and%20flexible%20interpretability%20of%20cartographic%20representations.%20In%20this%20context%2C%20Artificial%20Intelligence%20%28AI%29%20is%20described%20as%20a%20dynamic%20tool%20for%20problem-solving%2C%20capable%20of%20supporting%20continuous%20learning%20and%20application-oriented%20adaptation.%20By%20employing%20AI%20within%20a%20neopragmatist%20framework%20in%20cartography%2C%20new%20possibilities%20emerge%20for%20integrating%20and%20utilizing%20diverse%20social%20perspectives%20and%20%28geospatial%29%20data.%20This%20approach%20enables%20an%20expansion%20of%20the%20theoretical%20and%20practical%20applicability%20of%20cartography.%20Finally%2C%20the%20article%20illustrates%20that%20the%20deconstruction%5Cu2014building%20on%20J.%20B.%20Harley%5Cu2019s%20influential%20article%20Deconstructing%20the%20Map%20%281989%29%5Cu2014and%20reconstruction%20of%20maps%20must%20exist%20in%20a%20recursive%20relationship%20to%20enable%20a%20context-%20and%20solution-oriented%20cartography.%22%2C%22date%22%3A%222025-03-01%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs42489-024-00184-8%22%2C%22ISSN%22%3A%222524-4965%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs42489-024-00184-8%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-08-13T14%3A03%3A31Z%22%7D%7D%2C%7B%22key%22%3A%227NUIM88F%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Shi%20et%20al.%22%2C%22parsedDate%22%3A%222025%22%2C%22numCh
ildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BShi%2C%20M.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2025.2479796%26%23039%3B%26gt%3BGeography%20for%20AI%20sustainability%20and%20sustainability%20for%20GeoAI%26lt%3B%5C%2Fa%26gt%3B.%202025%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Geography%20for%20AI%20sustainability%20and%20sustainability%20for%20GeoAI%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Meilin%22%2C%22lastName%22%3A%22Shi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Krzysztof%22%2C%22lastName%22%3A%22Janowicz%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Judith%22%2C%22lastName%22%3A%22Verstegen%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Kitty%22%2C%22lastName%22%3A%22Currier%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Nina%22%2C%22lastName%22%3A%22Wiedemann%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mai%20%2CGengchen%22%2C%22lastName%22%3A%22%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Majic%20%2CIvan%22%2C%22lastName%22%3A%22%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Liu%20%2CZilong%22%2C%22lastName%22%3A%22%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rui%22%2C%22lastName%22%3A%22and%20Zhu%22%7D%5D%2C%22abstractNote%22%3A%22Recent%20years%20have%20witnessed%20a%20boom%20in%20the%20development%20of%20multimodal%20large-scale%20generative%20AI%20models.%20These%20computationally%20intensiv
e%20AI%20models%2C%20such%20as%20GPT-4%2C%20and%20their%20associated%20data%20centers%20have%20undergone%20increasing%20scrutiny%20in%20terms%20of%20their%20energy%20consumption%20and%20carbon%20emissions.%20As%20awareness%20of%20the%20energy%20costs%20and%20carbon%20footprints%20of%20AI%20models%20grows%2C%20attention%20has%20broadened%20to%20include%20other%20sustainability-related%20aspects%20such%20as%20their%20water%20consumption%2C%20transparency%2C%20and%20further%20environmental%20and%20social%20implications.%20In%20this%20work%2C%20we%20examine%20existing%20tools%2C%20frameworks%2C%20and%20evaluation%20metrics%2C%20complementing%20the%20ongoing%20discussions%20regarding%20AI%5Cu2019s%20environmental%20sustainability%20with%20a%20geographic%20perspective.%20This%20work%2C%20on%20the%20one%20hand%2C%20contributes%20to%20a%20geographically%20aware%20sustainability%20evaluation%20of%20current%20AI%20models.%20On%20the%20other%20hand%2C%20it%20examines%20the%20unique%20characteristics%20and%20challenges%20of%20GeoAI%20models%2C%20hoping%20to%20engage%20the%20GeoAI%20community%20in%20the%20sustainability%20discussion.%20Moving%20forward%2C%20we%20outline%20future%20directions%20on%20systematic%20reporting%20and%20geographically%20aware%20assessment.%20We%20then%20propose%20potential%20solutions%2C%20such%20as%20the%20adoption%20of%20Retrieval-Augmented%20Generation%20%28RAG%29%20models.%20Ultimately%2C%20we%20encourage%20future%20GeoAI%20research%20to%20acknowledge%20and%20address%20their%20environmental%20and%20social%20impact%2C%20thereby%20guiding%20GeoAI%20toward%20a%20more%20transparent%2C%20responsible%2C%20and%20sustainable%20future.%22%2C%22date%22%3A%222025%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2025.2479796%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2025.2479796%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-08-05T22%3A25%3A51Z%22%7D%7D%2C%7B%22ke
y%22%3A%22BND59HKF%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Marasinghe%20et%20al.%22%2C%22parsedDate%22%3A%222024-06-26%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMarasinghe%2C%20R.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-024-00184-2%26%23039%3B%26gt%3BTowards%20Responsible%20Urban%20Geospatial%20AI%3A%20Insights%20From%20the%20White%20and%20Grey%20Literatures%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Towards%20Responsible%20Urban%20Geospatial%20AI%3A%20Insights%20From%20the%20White%20and%20Grey%20Literatures%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Raveena%22%2C%22lastName%22%3A%22Marasinghe%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tan%22%2C%22lastName%22%3A%22Yigitcanlar%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Severine%22%2C%22lastName%22%3A%22Mayere%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Tracy%22%2C%22lastName%22%3A%22Washington%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mark%22%2C%22lastName%22%3A%22Limb%22%7D%5D%2C%22abstractNote%22%3A%22Artificial%20intelligence%20%28AI%29%20has%20increasingly%20been%20integrated%20into%20various%20domains%2C%20significantly%20impacting%20geospatial%20applications.%20Machine%20learning%20%28ML%29%20and%20computer%20vision%20%28CV%29%20are%20critical%20in%20urban%20decision-making.%20However%2C%20urban%20AI%20implementation%20faces%20unique%
20challenges.%20Academic%20literature%20on%20responsible%20AI%20largely%20focuses%20on%20general%20principles%2C%20with%20limited%20emphasis%20on%20the%20geospatial%20domain.%20This%20important%20gap%20in%20scholarly%20work%20could%20hinder%20effective%20AI%20integration%20in%20urban%20geospatial%20applications.%20Our%20study%20employs%20a%20multi-method%20approach%2C%20including%20a%20systematic%20academic%20literature%20review%2C%20word%20frequency%20analysis%20and%20insights%20from%20grey%20literature%2C%20to%20examine%20potential%20challenges%20and%20propose%20strategies%20for%20effective%20geospatial%20AI%20%28GeoAI%29%20integration.%20We%20identify%20a%20range%20of%20responsible%20practices%20relevant%20to%20the%20complexities%20of%20using%20AI%20in%20urban%20geospatial%20planning%20and%20its%20effective%20implementation.%20The%20review%20provides%20a%20comprehensive%20and%20actionable%20framework%20for%20responsible%20AI%20adoption%20in%20the%20geospatial%20domain%2C%20offering%20a%20roadmap%20for%20urban%20researchers%20and%20practitioners.%20It%20highlights%20ways%20to%20optimise%20AI%20benefits%20while%20minimising%20potential%20negative%20consequences%2C%20contributing%20to%20urban%20sustainability%20and%20equity.%22%2C%22date%22%3A%222024-06-26%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1007%5C%2Fs41651-024-00184-2%22%2C%22ISSN%22%3A%222509-8829%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs41651-024-00184-2%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222024-11-15T18%3A44%3A13Z%22%7D%7D%2C%7B%22key%22%3A%226A6ZJC4D%22%2C%22library%22%3A%7B%22id%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Kang%20et%20al.%22%2C%22parsedDate%22%3A%222024-01-16%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entr
y%26quot%3B%26gt%3BKang%2C%20Y.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295943%26%23039%3B%26gt%3BArtificial%20intelligence%20studies%20in%20cartography%3A%20a%20review%20and%20synthesis%20of%20methods%2C%20applications%2C%20and%20ethics%26lt%3B%5C%2Fa%26gt%3B.%202024%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Artificial%20intelligence%20studies%20in%20cartography%3A%20a%20review%20and%20synthesis%20of%20methods%2C%20applications%2C%20and%20ethics%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yuhao%22%2C%22lastName%22%3A%22Kang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Song%22%2C%22lastName%22%3A%22Gao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Robert%20E.%22%2C%22lastName%22%3A%22Roth%22%7D%5D%2C%22abstractNote%22%3A%22The%20past%20decade%20has%20witnessed%20the%20rapid%20development%20of%20geospatial%20artificial%20intelligence%20%28GeoAI%29%20primarily%20due%20to%20the%20ground-breaking%20achievements%20in%20deep%20learning%20and%20machine%20learning.%20A%20growing%20number%20of%20scholars%20from%20cartography%20have%20demonstrated%20successfully%20that%20GeoAI%20can%20accelerate%20previously%20complex%20cartographic%20design%20tasks%20and%20even%20enable%20cartographic%20creativity%20in%20new%20ways.%20Despite%20the%20promise%20of%20GeoAI%2C%20researchers%20and%20practitioners%20have%20growing%20concerns%20about%20the%20ethical%20issues%20of%20GeoAI%20for%20cartography.%20In%20this%20paper%2C%20we%20conducted%20a%20systematic%20content%20analysis%20and%20narrative%20synthesis%20of%20research%20studies%20integrating%20GeoAI%20and%20cartography%20to%20summarize%20current%20research%20and%20development%20trends%2
0regarding%20the%20usage%20of%20GeoAI%20for%20cartographic%20design.%20Based%20on%20this%20review%20and%20synthesis%2C%20we%20first%20identify%20dimensions%20of%20GeoAI%20methods%20for%20cartography%20such%20as%20data%20sources%2C%20data%20formats%2C%20map%20evaluations%2C%20and%20six%20contemporary%20GeoAI%20models%2C%20each%20of%20which%20serves%20a%20variety%20of%20cartographic%20tasks.%20These%20models%20include%20decision%20trees%2C%20knowledge%20graph%20and%20semantic%20web%20technologies%2C%20deep%20convolutional%20neural%20networks%2C%20generative%20adversarial%20networks%2C%20graph%20neural%20networks%2C%20and%20reinforcement%20learning.%20Further%2C%20we%20summarize%20seven%20cartographic%20design%20applications%20where%20GeoAI%20have%20been%20effectively%20employed%3A%20generalization%2C%20symbolization%2C%20typography%2C%20map%20reading%2C%20map%20interpretation%2C%20map%20analysis%2C%20and%20map%20production.%20We%20also%20raise%20five%20potential%20ethical%20challenges%20that%20need%20to%20be%20addressed%20in%20the%20integration%20of%20GeoAI%20for%20cartography%3A%20commodification%2C%20responsibility%2C%20privacy%2C%20bias%2C%20and%20%28together%29%20transparency%2C%20explainability%2C%20and%20provenance.%20We%20conclude%20by%20identifying%20four%20potential%20research%20directions%20for%20future%20cartographic%20research%20with%20GeoAI%3A%20GeoAI-enabled%20active%20cartographic%20symbolism%2C%20human-in-the-loop%20GeoAI%20for%20cartography%2C%20GeoAI-based%20mapping-as-a-service%2C%20and%20generative%20GeoAI%20for%20cartography.%22%2C%22date%22%3A%222024-01-16%22%2C%22language%22%3A%22en%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2023.2295943%22%2C%22ISSN%22%3A%221523-0406%2C%201545-0465%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.tandfonline.com%5C%2Fdoi%5C%2Ffull%5C%2F10.1080%5C%2F15230406.2023.2295943%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-07-31T16%3A30%3A55Z%22%7D%7D%2C%7B%22key%22%3A%2265DFP74U%22%2C%22library%22%3A%7B%22i
d%22%3A5447768%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhao%20et%20al.%22%2C%22parsedDate%22%3A%222021-07-04%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhao%2C%20B.%20et%20al.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2021.1910075%26%23039%3B%26gt%3BDeep%20fake%20geography%3F%20When%20geospatial%20data%20encounter%20Artificial%20Intelligence%26lt%3B%5C%2Fa%26gt%3B.%202021%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Deep%20fake%20geography%3F%20When%20geospatial%20data%20encounter%20Artificial%20Intelligence%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Bo%22%2C%22lastName%22%3A%22Zhao%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shaozeng%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chunxue%22%2C%22lastName%22%3A%22Xu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifan%22%2C%22lastName%22%3A%22Sun%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Chengbin%22%2C%22lastName%22%3A%22Deng%22%7D%5D%2C%22abstractNote%22%3A%22The%20developing%20convergence%20of%20Artificial%20Intelligence%20and%20GIScience%20has%20raised%20a%20concern%20on%20the%20emergence%20of%20deep%20fake%20geography%20and%20its%20potentials%20in%20transforming%20human%20perception%20of%20the%20geographic%20world.%20Situating%20fake%20geography%20under%20the%20context%20of%20modern%20cartography%20and%20GIScience%2C%20this%20paper%20presents%20an%20empirical%20study%20to%20dissect%20the%20algorithm
ic%20mechanism%20of%20falsifying%20satellite%20images%20with%20non-existent%20landscape%20features.%20To%20demonstrate%20our%20pioneering%20attempt%20at%20deep%20fake%20detection%2C%20a%20robust%20approach%20is%20then%20proposed%20and%20evaluated.%20Our%20proactive%20study%20warns%20of%20the%20emergence%20and%20proliferation%20of%20deep%20fakes%20in%20geography%20just%20as%20%5Cu201clies%5Cu201d%20in%20maps.%20We%20suggest%20timely%20detections%20of%20deep%20fakes%20in%20geospatial%20data%20and%20proper%20coping%20strategies%20when%20necessary.%20More%20importantly%2C%20it%20is%20encouraged%20to%20cultivate%20a%20critical%20geospatial%20data%20literacy%20and%20thus%20to%20understand%20the%20multi-faceted%20impacts%20of%20deep%20fake%20geography%20on%20individuals%20and%20human%20society.%22%2C%22date%22%3A%222021-07-04%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1080%5C%2F15230406.2021.1910075%22%2C%22ISSN%22%3A%221523-0406%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1080%5C%2F15230406.2021.1910075%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-09-19T19%3A00%3A34Z%22%7D%7D%5D%7D
Kausika, B.B. et al. GeoAI in Topographic Mapping: Navigating the Future of Opportunities and Risks. 2025
Edler, D. et al. Fallibilism and Generative AI in Cartography: Some Fundamental Theoretical Thoughts. 2025
Shi, M. et al. Geography for AI sustainability and sustainability for GeoAI. 2025
Marasinghe, R. et al. Towards Responsible Urban Geospatial AI: Insights From the White and Grey Literatures. 2024
Zhao, B. et al. Deep fake geography? When geospatial data encounter Artificial Intelligence. 2021
Note: To date, this bibliography covers research applying deep learning architectures; it does not yet include work based on traditional machine learning algorithms.
Remember, this is just a starting point: explore these resources and search for specific topics within GeoAI. If you would like a publication to be added, please fill in the form below: