By John L Kelley, CEO and Co-Founder, GeoSapient, Inc.
GIS and AI integration.
GeoSapient explores the intricacies of geocomputing data, processes, and applications, looking beyond the rise of spaceborne and airborne resources.
GIS Applications Transformed by Disruptive Generative AI Agents
GIS and Artificial Intelligence (AI) together revolutionize spatial analysis and decision-making. Let’s begin with a brief overview of AI Agents.
AI agents are autonomous software that execute tasks, make decisions, and supply insights. Data (at rest and streaming) allows these agents to learn, reason, and adapt to changing environments. Two categories of model underpin AI agents. Large Language Models (LLMs) process vast amounts of text and structured data, and can power sophisticated analysis and predictions. Small Language Models (SLMs) are lightweight and optimized for specific tasks, making them well suited to localized geospatial queries or particular feature extractions.
Agents will enable autonomous feature extraction from satellite imagery, accurately identifying roads, buildings, and water bodies. SLMs complement this by supporting targeted tasks like specific urban trend predictions or localized disaster risk modeling. Together, they enable (near) real-time, data-driven action through spatial decision-making.
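To make the water-body case concrete: long before agents orchestrate it, the underlying extraction step can be done with a classical spectral index. The sketch below uses the Normalised Difference Water Index (NDWI), computed from green and near-infrared bands; the function name, threshold, and toy pixel values are illustrative assumptions, not GeoSapient's pipeline.

```python
import numpy as np

def extract_water_mask(green: np.ndarray, nir: np.ndarray,
                       threshold: float = 0.0) -> np.ndarray:
    """Flag water pixels via NDWI = (green - nir) / (green + nir).

    Open water reflects strongly in green and absorbs near-infrared,
    so its NDWI is positive; vegetation and bare soil fall below zero.
    """
    # Guard against division by zero on dark pixels.
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)
    return ndwi > threshold

# Toy 2x2 scene: left column mimics open water (high green, low NIR),
# right column mimics vegetation (low green, high NIR).
green = np.array([[0.8, 0.1],
                  [0.7, 0.2]])
nir = np.array([[0.1, 0.8],
                [0.2, 0.7]])

mask = extract_water_mask(green, nir)
```

An agent-driven workflow would wrap a step like this (or a learned segmentation model) in a loop that selects imagery, runs the extraction, and acts on the result.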
By Seb Lessware, CTO, 1Spatial
Automation, that’s what it’s all about, really. That’s what the industrial revolution and the information age have been driven by: Doing things faster or automatically or in greater quantities than when done ‘by hand’.
That’s why organisations strive for digitalisation, AI, sensors, ‘digital twins’ and data hubs: those aren’t useful outcomes in themselves, but they can enable ‘the useful stuff’ to be done automatically, faster, more efficiently or more safely – and that’s especially important when government budgets are being squeezed but the same or higher output is expected.
The prospect of more automation is the reason there’s such hype around AI: not because it gives any insights into the nature of intelligence (hint – it doesn’t), but because it can help automate things that are more ambiguous and have been harder to automate. That ambiguity, though, is also what makes people wary of it, because those decisions and outputs are hard to scrutinise and validate, and are sometimes confidently incorrect.