Mark Smith, CEO of Geospatial Corporation, spoke this week with GISCafe Voice about the challenges of mapping the underground, which includes mapping underwater. The company’s goal is to create an underground “map of the world,” by doing it “one pipeline at a time.” This is a sensible approach to a project that may seem a bit like trying to eat an elephant (start with the toes!). With the help of sensors and Geospatial’s cloud-based GIS platform, GeoUnderground, it looks like the goal is highly attainable.
What are specific challenges to mapping underground utilities?
The most obvious challenge is that the pipelines and conduits are underground or underwater, which makes the selection of the data acquisition methodology very important. I like to say that the difference between locating and mapping is pretty straightforward. Locators attempt to “clear” an area for a specific reason, such as in preparation for a construction project. At Geospatial Corporation, we approach a project in a very “holistic” manner. We know there is no “silver bullet” that will allow us to accurately map every type of buried infrastructure within a facility, right of way or municipality. We know that we need to use many types of data acquisition technologies to obtain a complete “picture” or “map” of the underground. In addition, getting this vast amount of data properly into a GIS platform from the field, often with numerous techs collecting below and above ground over large areas, is itself a trick. For this we have developed GeoUnderground, our proprietary cloud-based GIS platform built on Google Maps. GeoUnderground provides an economical, SaaS-based, powerful yet very simple-to-use GIS platform accessible from any phone. Our goal is to have every data acquisition tool seamlessly integrate into GeoUnderground.
What solutions do you provide to achieve goals?
At Geospatial we consider our data acquisition technologies to be simply “sensors on a platform”. The platform could be designed to run inside of a pipeline or conduit and carry various types of gyroscopic or electromagnetic sensors. These technologies are extremely accurate under most conditions and allow us to accurately map, in x, y and z, pipelines and conduits from as small as 1.5 inches in diameter up to 20 feet in diameter. These technologies are often used on projects for telecom (such as AT&T, Comcast and Verizon). This is also applicable for sewers, gas lines and numerous other types of infrastructure. We have developed a method of combining technologies to geo-reference the video collected inside a pipeline during periodic inspections. This allows the pipeline owner to locate any defects within the pipeline, providing an exact x, y and z location of the defect. It also allows the video data to be stored, viewed, edited and shared on GeoUnderground. We are constantly looking for new types of data acquisition and data management technologies to add to GeoUnderground. To this end, we are creating strategic alliances with numerous sensor companies.
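As a rough illustration of how geo-referenced inspection video might be tied back to a mapped centerline, the sketch below interpolates a defect’s x, y and z position from a gyroscopically derived centerline log keyed by distance along the pipe. The data layout, field names and constant crawler speed are assumptions made for the example, not Geospatial’s actual workflow.

```python
# Minimal sketch: estimate the x, y, z position of a defect seen in inspection
# video by interpolating along a gyroscopically mapped pipeline centerline.
# Assumptions (not Geospatial's actual method): the centerline is a list of
# (chainage_m, x, y, z) samples and the video crawler moves at a roughly constant speed.

from bisect import bisect_left

# Example centerline: distance along pipe (m) -> coordinates in a local grid (m)
centerline = [
    (0.0,  500000.0, 4420000.0, 12.0),
    (10.0, 500009.8, 4420001.5, 11.6),
    (20.0, 500019.5, 4420003.1, 11.1),
    (30.0, 500029.3, 4420004.8, 10.7),
]

def position_at_chainage(chainage, line):
    """Linearly interpolate x, y, z at a given distance along the pipe."""
    stations = [p[0] for p in line]
    i = bisect_left(stations, chainage)
    if i == 0:
        return line[0][1:]
    if i >= len(line):
        return line[-1][1:]
    (s0, *p0), (s1, *p1) = line[i - 1], line[i]
    t = (chainage - s0) / (s1 - s0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# A defect spotted 14.2 s into the video, crawler speed ~1.5 m/s (assumed)
defect_chainage = 14.2 * 1.5
print(position_at_chainage(defect_chainage, centerline))  # interpolated (x, y, z)
```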
Are you creating a map of the world’s underground infrastructure and if so, when do you think that will be completed and how will it be maintained?
Yes, our slogan is that we are creating a map of the world’s underground, one pipeline at a time. In reality we are aggregating data on behalf of our clients that is slowly but surely creating a map of the underground. As more and more of our clients realize the benefits of mapping and knowing the location of their critical assets, the mitigation of risk and the ROI obtainable from sophisticated analysis, they will accelerate the mapping of their underground and above-ground assets. More and more infrastructure stakeholders are beginning to plan to map their entire facilities.
How do Blockchain technologies figure in?
It’s a massive undertaking to attempt to map the underground. Just as we are constantly finding new sensor applications, we are also exploring new software applications utilizing Blockchain, machine learning and artificial intelligence.
How do you renovate or replace utility structures that are underwater?
Geospatial doesn’t repair or replace pipelines, but we do have several ways to map pipelines underwater, involving either our gyroscopic technologies or our electromagnetic technologies. We have successfully mapped a telecom conduit under the East River in New York City, as well as under the Harlem River in NYC, the Savannah River in Georgia, and the Intracoastal Waterway in Charleston, along with many other rivers and lakes across the USA.
What do you think will be the result of mapping the outdated infrastructure, and how might it be maintained or retrofitted using your data?
A few years back, no one would have guessed that all of the above-ground infrastructure would be digitally mapped from the air, from unmanned drones or from the streets. The underground infrastructure is the last unmapped frontier. We can only begin to speculate about the many uses and benefits derived from having an accurate 3D map of the underground. Smart City initiatives, increasing federal and state requirements for gas and oil pipelines, an abundance of new sensors creating the Internet of Things, and the ability to run risk analysis on critical pipelines all require management to know the exact position and depth of our critical infrastructure.
Yuneec announced the availability of Pix4Dcapture on its H520 ST16S ground station controller. Pix4D is a premier software application that creates professional, georeferenced maps and models from drone imagery, giving users the ability to map flight plans and set customized mapping parameters.
Recently, ArcGIS Pro specialists at Mapillary answered a few questions for GISCafe Voice:
How long has Mapillary been in existence? What is its primary focus?
Mapillary is a street-level imagery platform powered by collaboration and computer vision. The company was founded in 2013.
Mapillary combines images from any device into a visualization of the world to generate data for improving maps, developing cities, and progressing the automotive industry. Mapillary’s tools enable anyone to collect, share, and use street-level images. Computer vision technology reconstructs locations in 3D and recognizes objects from the images to generate map data at scale. Today, people and organizations all over the world have contributed over 250 million images toward Mapillary’s mission of helping people understand the world’s places through images and making this data available.
What does the new Mapillary for ArcGIS Pro beta contain – what are its primary features?
The Beta focuses on bringing Mapillary public imagery into ArcGIS Pro. In short, it lets customers:
view Mapillary imagery as visual reference,
view, edit, and create features in street-level imagery,
compare imagery to see how places change over time.
What was in the previous release and why did you make certain feature upgrades?
The latest version, available in Public Beta, contains the same general functionality as earlier releases. However, we’ve made considerable performance improvements.
Earlier releases of Mapillary for ArcGIS Pro faced a challenge when rendering the large number of features required to show our imagery coverage. Our previous method of serializing vector tiles into a feature layer came coupled with a decrease in performance. For the Public Beta, we’ve notably increased performance and reduced system overhead by serving vector tiles directly into ArcGIS Pro. This means a faster and more efficient experience using Mapillary Imagery from the add-in.
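To give a sense of what consuming vector tiles directly (rather than serializing them into a feature layer) involves, here is a hedged Python sketch that fetches one Mapillary coverage tile and decodes it with the mapbox-vector-tile package. The tile URL pattern, layer names and token parameter are assumptions based on Mapillary’s public tile API; the add-in’s internal implementation is not public.

```python
# Hedged sketch: fetch and decode a single Mapillary coverage vector tile.
# The endpoint, tile address and auth parameter are assumptions for illustration.

import requests
import mapbox_vector_tile  # pip install mapbox-vector-tile

ACCESS_TOKEN = "MLY|your_token_here"   # hypothetical placeholder token
z, x, y = 14, 8800, 5373               # an arbitrary tile address

url = f"https://tiles.mapillary.com/maps/vtp/mly1_public/2/{z}/{x}/{y}"
resp = requests.get(url, params={"access_token": ACCESS_TOKEN}, timeout=30)
resp.raise_for_status()

tile = mapbox_vector_tile.decode(resp.content)   # dict: layer name -> features
for layer_name, layer in tile.items():
    print(layer_name, len(layer["features"]), "features")
```

Serving tiles like this on demand avoids materializing every feature up front, which is the performance trade-off described above.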
Is a specific type of camera used?
The imagery on Mapillary is contributed collaboratively by Mapillary users all over the world: individuals, companies, non-profits, and governments. The platform is device-agnostic so every contributor uses a camera setup that suits them best, from Mapillary mobile apps to action cameras to professional 360-degree cameras.
What kind of geotagging of photos is used?
The Mapillary mobile apps (including integrations with some common action and 360-degree cameras) save location information into the image EXIF during capture, and the images are then uploaded to Mapillary directly via the app. In addition, any geotagged images can be uploaded with the help of our web uploader or command line tools. It’s also possible to upload image files together with a .gpx file that’s used for geotagging during the upload process.
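For readers curious what “location information in the image EXIF” amounts to in practice, the short sketch below reads GPS latitude and longitude from a geotagged photo using Pillow. It is a generic illustration rather than Mapillary’s upload code, and it assumes a recent Pillow version and an image that actually carries a GPS IFD.

```python
# Generic sketch: read the GPS position a camera wrote into an image's EXIF.
# Not Mapillary's uploader; just an illustration of how EXIF geotags are stored.

from PIL import Image            # pip install Pillow

GPS_IFD = 0x8825                 # EXIF pointer to the GPS Info IFD

def exif_lat_lon(path):
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps:
        return None
    def to_degrees(dms, ref):
        d, m, s = (float(v) for v in dms)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg
    # GPS IFD tags: 1 = LatitudeRef, 2 = Latitude, 3 = LongitudeRef, 4 = Longitude
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(exif_lat_lon("photo.jpg"))  # e.g. (55.605, 13.003), or None if no geotag
```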
Both large, full-size satellites and small satellites are now being used for various purposes around the globe. In addition, constellations of satellites are being developed for specific purposes, such as internet satellites. We also include here maritime surveillance that relies on satellite Automatic Identification System (AIS) payloads.
This week’s GIS news includes a wide variety of announcements, from IBM’s PAIRS Geoscope to Caliper’s redistricting data for the 2018 edition of Congressional Districts.
There is a great need for services that facilitate working with large amounts of geospatial data from disparate sources. IBM addresses that need with their announcement of PAIRS Geoscope, a new experimental cloud-based service that makes it easier for developers to work with large amounts of geospatial data from across a wide variety of sources. The service handles ingesting, integrating and managing the data and allows developers to focus on their queries.
Aerial mapping company Bluesky of Leicestershire, UK has completed a research project backed by the UK government’s innovation agency, Innovate UK, to develop the use of mobile phones for capturing accurate 3D spatial information.
The nine-month research project focused on the use of standard smartphone technology to capture and calibrate video footage, and then convert it to 3D information. Designed for electricity Distribution Network Operators (DNOs) and other organizations with a distributed asset base, the low-cost measurement tool can provide an accurate record of an asset’s location and its environment. Accurate measurements of essential infrastructure, such as overhead power lines and other utility facilities, could then be extracted using specially developed algorithms and workflows.
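The project’s specific algorithms aren’t published, but the first step of turning phone video into 3D information generally looks something like the OpenCV sketch below: sample frames from the video and match features between consecutive frames, which downstream structure-from-motion and calibration steps would turn into measurable 3D geometry. The input file name, frame spacing and detector choice are arbitrary assumptions.

```python
# Illustrative first step of a video-to-3D workflow (not Bluesky's actual pipeline):
# sample frames from a phone video and match ORB features between them.
# Matched features would then feed structure-from-motion to recover 3D points.

import cv2  # pip install opencv-python

cap = cv2.VideoCapture("powerline_survey.mp4")   # hypothetical input video
frames = []
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 15 == 0:                            # keep roughly every 15th frame (assumed spacing)
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    idx += 1
cap.release()

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

kp_prev, des_prev = orb.detectAndCompute(frames[0], None)
for frame in frames[1:]:
    kp, des = orb.detectAndCompute(frame, None)
    matches = matcher.match(des_prev, des)
    print(f"{len(matches)} feature matches between consecutive sampled frames")
    kp_prev, des_prev = kp, des
```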
Trimble announced the release of the Trimble® MX9 mobile mapping solution, completing Trimble’s mobile mapping portfolio. A next-generation mobile mapping system, the Trimble MX9 combines a vehicle-mounted mobile LIDAR system, multi-camera imaging and field software designed for efficient, precise and high-volume data capture for a broad range of mobile mapping applications such as road surveys, topographic mapping, 3D modeling and asset management.
Trimble MX9 Back Perspective
According to company materials, the Trimble MX9 is characterized by its ability to capture dense point cloud data along with 360-degree immersive georeferenced imagery using an industry-leading spherical camera, GNSS/INS technology and dual-head laser scanning sensors. The system’s lightweight design makes it easy to install and set up on a variety of vehicles. Spatial data can be captured at highway speeds from inside the vehicle for safe operation in transportation corridors. The intuitive, browser-based field software, accessible via most tablets or any notebook, enables operators to quickly establish and conduct data acquisition missions, monitor the status of the system and assess the quality of the acquired data in real time.
Christian Hoffmann, Market Manager, Mobile Mapping Solutions, Trimble Geospatial spoke with GISCafe Voice about the recent announcement:
GISCafe Voice: Has Trimble had a mobile mapping solution before the MX9?
The Trimble Mobile Mapping portfolio has been in the market for more than a decade with popular products like the MX2 and MX7, which we currently sell. The MX9 completes Trimble’s mobile mapping portfolio, adding a high-end system that is designed for efficient acquisition of survey-grade dense point cloud data and imagery. The lightweight design and a focus on easy, tablet-based operation lower the learning curve and help maximize ROI.
Trimble MX9 Top View
GISCafe Voice: Is there a limit to how much point cloud data the MX9 can gather?
The system collects up to 2 million points per second plus various imagery, which is one of the highest data rates in the market. Two 2 TB SSD drives allow a large amount of data to be recorded, typically 7-8 hours of constant data recording. Details depend on the project specifications. Customers can use additional sets of disks in order to maximize acquisition capacity.
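As a rough sanity check on those figures (with an assumed per-point storage cost, since the on-disk format isn’t stated), the arithmetic below relates 2 million points per second over an 8-hour session to the roughly 4 TB of onboard SSD capacity.

```python
# Back-of-the-envelope check; the 20 bytes per stored point is an assumption.
points_per_second = 2_000_000
hours = 8
bytes_per_point = 20                       # illustrative assumption only

points = points_per_second * 3600 * hours  # ~5.76e10 points
point_bytes = points * bytes_per_point     # ~1.15e12 bytes, about 1.15 TB
capacity = 2 * 2e12                        # two 2 TB SSDs, roughly 4 TB

print(f"points collected: {points:.2e}")
print(f"point data: {point_bytes / 1e12:.2f} TB of {capacity / 1e12:.0f} TB capacity")
# The remaining capacity would be consumed by the spherical and multi-camera imagery.
```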
Recently, Scottish Geographic Information Systems (GIS) company thinkWhere announced the launch of a new cloud-based platform for GIS and geographic data, theMapCloud. theMapCloud allows maps, open data and business records to be accessed anytime and anywhere through a web-connected computer or mobile device. Using standard web browsers, users can view, retrieve and share maps, geographic data and other open datasets. As well as providing a platform for GIS and other web applications, theMapCloud can be used for a host of data services and Software as a Service (SaaS) applications.
This questionnaire is aimed at those who do research and development on traditional artificial satellites and “smallsats,” as well as satellite customers and companies providing third-party solutions for them. Since manufacturers of larger satellites produce small satellites as well, larger satellites, their features, and their pros and cons are also covered in this questionnaire.
McMurdo Station Iceberg, Antarctica, NASA, taken from a small sat.
2017 tested the resilience of geospatial technologies with many natural disasters. In reviewing the year, we take a look at products, services and technologies that moved the industry forward and responded effectively to geospatial needs.
Disaster response, weather tracking, and predictive weather analysis drove a great deal of development and put to the test those technologies in place for just such eventualities.
Other areas of interest include new developments in sensors, location and Big Data, small sats, mobile mapping and 3D models for indoor mapping, autonomous driving, and building smart cities.
Under the Weather
In an interview, URISA GISCorps founder Shoreh Elhami and URISA executive director Wendy Nelson offer a broader understanding of what GISCorps is about and how it can help with natural disasters.
Is ArcGIS Online able to generate a setting for help, i.e., website, app, or whatever resource might be needed, during a natural disaster event? And how soon might that be available to the public?
ArcGIS Online (AGO) can be used to create a variety of story maps. Those story maps, as well as any AGO-based web apps, can be embedded in any website very quickly. A good example is the web app that our volunteers embedded in Fort Bend County’s website on road closures. Another example is a story map built by NAPSG shortly after the disaster; our volunteers also assisted with that project.
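To illustrate how quickly an ArcGIS Online web map can be stood up and shared during an event, here is a hedged sketch using the ArcGIS API for Python. The credentials, item ID and item details are placeholders, the module path may vary by package version, and this is a generic illustration rather than the GISCorps workflow.

```python
# Hedged sketch: publish and publicly share a simple web map with the
# ArcGIS API for Python. Item details and the layer item ID are placeholders,
# not the actual Fort Bend County road-closure service.

from arcgis.gis import GIS           # pip install arcgis
from arcgis.mapping import WebMap

gis = GIS("https://www.arcgis.com", "your_username", "your_password")

wm = WebMap()
closures = gis.content.get("itemid_of_road_closures_layer")   # hypothetical item ID
wm.add_layer(closures)

item = wm.save({
    "title": "Road Closures - Example",
    "snippet": "Illustrative web map of reported road closures",
    "tags": "disaster response, road closures",
})
item.share(everyone=True)            # make it publicly viewable and embeddable
print(item.homepage)                 # URL that can be embedded in a county website
```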
How has the GIS relief effort for Hurricane Harvey been handled by GISCorps so far and what are the plans going forward?
Twenty-six of our volunteers have been working on mapping road closures in Fort Bend County. The information originates from the County’s website, emails, and also tweets. The web app has been helpful to residents, first responders, and county staff. The project was led by two of our volunteers who worked with GISCorps Core Committee members on managing the project. The Centers for Disease Control and Prevention (CDC) also requested the assistance of a GIS programmer to pull data from the FEMA site on an ongoing basis. The GISCorps Recruitment team selected a volunteer within 30 hours and put the volunteer in contact with the CDC. We also asked our volunteers to contribute to the NAPSG story map. We are currently on standby and ready to assist with other projects, be it for Harvey or Irma.
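As an illustration of what “pulling data from the FEMA site on an ongoing basis” could look like, the sketch below polls the public OpenFEMA API for recent disaster declarations. The dataset, filter and polling interval are assumptions for the example, not the volunteer’s actual script.

```python
# Illustrative polling of the public OpenFEMA API (not the actual CDC/GISCorps script).
# Dataset, filter and interval are assumptions for the example.

import time
import requests

URL = "https://www.fema.gov/api/open/v2/DisasterDeclarationsSummaries"

def fetch_texas_declarations():
    params = {
        "$filter": "state eq 'TX'",
        "$orderby": "declarationDate desc",
        "$top": 25,
    }
    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("DisasterDeclarationsSummaries", [])

while True:
    for record in fetch_texas_declarations():
        print(record.get("disasterNumber"), record.get("declarationTitle"))
    time.sleep(3600)   # re-check hourly (assumed interval)
```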
Hurricane Harvey weather map
How do the projects for Hurricane Harvey and Katrina differ or are they the same? What are the priorities?
Quite different. For Katrina, we deployed 30 volunteers onsite; the option to assist remotely didn’t even exist. Volunteers packed up their bags, laptops, and other essentials and headed over to the affected areas within a couple of days. For Harvey (and many other disasters of the past few years), we haven’t had to send anyone anywhere. Volunteers work from their homes or offices and have been effective in different ways. For Katrina, the priority was to help with the rescue efforts at first (locating people in distress and reporting to the Coast Guard), and then the recovery phase began, where volunteers made hundreds of maps and conducted lots of analysis. For Harvey, crowdsourcing and information from social media have become major sources of information for developing interactive maps for first responders and other affected populations.
Tom Jeffrey, CEO of CoreLogic, a leading global property information, analytics and data-enabled solutions provider, talked about the company’s analysis of the flooding and storm surge resulting from Hurricane Harvey.