Wrong or “Wronger?” : When Surveyed Data Misses the GIS Data – Is the Survey “Wrong”?
April 21st, 2014 by Gavin Schrock
“In any moment of decision, the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing” – Theodore Roosevelt
Yes, I know that there is no such word as “wronger”. But I also know that there is no such thing as an infallible survey, and most certainly no such thing as an infallible GIS. Disclaimer: I am a licensed surveyor, but I have also been involved in geodetic framework development and data acquisition for GIS for three decades. Lately I have seen instances of a disturbing reversal of the momentum that had been carrying us toward the dream of seamless enterprise geospatial data – push-back from both sides of the survey/engineering–GIS “fence”. Mostly this is due to mutual misunderstanding, and primarily to the mistaken notion that there needs to be a “fence” in the first place. The real harm this “fence-ism” does is in wasting data-rich and accurate resources.
An example: Recently, a minor firestorm was lit in the GIS division of a sizable enterprise in the Pacific Northwest when a CAD drawing, from the engineering/surveying group in the same enterprise, was overlaid on the enterprise GIS. There were discrepancies of half a foot on many features. This caused a panic. The consensus was that it would be harmful for the surveyed data to be included as a theme in the enterprise GIS because end users would be confronted with anomalies and conflicting features – that the survey was “wrong” because it did not match the homogeneous GIS. Apart from the disturbing notion that features actually collected by field measurement might be dismissed as a potentially valuable resource, there is an undertone of “the convoy only goes as fast as the slowest ship” defeatism that runs counter to all of the movement towards dynamic GIS, GeoDesign, and 3D GIS. Let’s break down some elements of such “discrepancies”; they are not just spatial – there are questions of data origin and completeness…
The Dynamic Earth
If a GIS was established in, say, 1991, and the reference themes of control, cadastral, and corridor were initially tied to an established and published reference framework (e.g. the National Geodetic Survey (NGS) “HARN”, or High Accuracy Reference Network, of that era), then that was a geodetic “snapshot” in time. We live in a “4D” world, with the 4th dimension being time. The earth moves at different rates in different parts of this country and the world; plate motion can amount to several centimeters a year. As an example, take the velocity of the Continuously Operating Reference Station “SEAT” at the University of Washington, monitored for several decades by the NGS. The velocities are as follows:
NAD_83 (2011) VELOCITY
Transformed from IGS08 velocity in Aug 2011.
VX = 0.0082 m/yr northward = 0.0046 m/yr
VY = 0.0009 m/yr eastward = 0.0064 m/yr
VZ = 0.0012 m/yr upward = -0.0026 m/yr
You can look up the velocity of any NGS reference station in your area (of the US) at: geodesy.noaa.gov
The last column tells the tale: 4.6mm northward, and 6.4mm eastward per year. From 1991 to 2014 that adds up to 10.58cm northward, 14.72cm eastward, or 0.347 feet northward, 0.483 feet eastward – about 4” x 6”. Then that half a foot discrepancy looks a bit moot.
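The arithmetic above is simple enough to sketch in a few lines. This is just a back-of-the-envelope check of the drift figures, using the published north and east velocities for SEAT; the constant-velocity assumption is the same simplification the paragraph makes.

```python
# Accumulated tectonic drift for a constant station velocity
# (4.6 mm/yr north, 6.4 mm/yr east, per the SEAT values above).
M_PER_FT = 0.3048

def cumulative_shift_cm(v_mm_per_yr: float, years: float) -> float:
    """Accumulated displacement in centimeters for a constant velocity."""
    return v_mm_per_yr * years / 10.0

years = 2014 - 1991  # 23 years between the 1991 GIS epoch and "today"
north_cm = cumulative_shift_cm(4.6, years)
east_cm = cumulative_shift_cm(6.4, years)

print(f"northward: {north_cm:.2f} cm = {north_cm / 100 / M_PER_FT:.3f} ft")
print(f"eastward:  {east_cm:.2f} cm = {east_cm / 100 / M_PER_FT:.3f} ft")
# northward: 10.58 cm = 0.347 ft
# eastward:  14.72 cm = 0.483 ft
```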
But not all of the changes in published values for geodetic control reference marks (monuments and reference stations) are due purely to velocity. Agencies like the NGS, many state departments of transportation, and some local entities update the published values to account for velocity, but also because the capability to use GPS and other resources to refine the data and remove the positional “noise” of past reference frameworks has improved markedly in recent years. The NGS has republished on a number of “epochs” since 1991. But the reference frameworks of all but a very few GIS have not been updated; it can be cumbersome and expensive to do so.
All is not lost; it is very easy to pack the data into the DeLorean and go back in time to 1991. The shift can easily be calculated and applied before the new data is put into the enterprise GIS, even on-the-fly. Easier (though disturbing) is simply rejecting the data. Now let’s look at some other areas of discrepancy, and see if they can (surprisingly) make an even stronger case for considering adding data from surveys.
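As a minimal sketch of the DeLorean trip, here is one way the shift could be removed from modern survey coordinates before loading them into a GIS referenced to an older epoch. The function name, the planar easting/northing model, and the sample coordinates are all illustrative assumptions; a production workflow would use NGS tooling (e.g. HTDP) and a proper reference-frame transformation rather than a bare velocity subtraction.

```python
# Hedged sketch: "send the data back to 1991" by removing accumulated
# tectonic drift from survey coordinates observed at a later epoch.
# Assumes planar coordinates and a constant local velocity -- a
# simplification, not a substitute for a real datum/epoch transformation.

def to_legacy_epoch(easting_m, northing_m, survey_epoch, gis_epoch,
                    v_east_m_yr, v_north_m_yr):
    """Subtract velocity * elapsed time from planar coordinates."""
    dt = survey_epoch - gis_epoch
    return (easting_m - v_east_m_yr * dt,
            northing_m - v_north_m_yr * dt)

# A made-up surveyed manhole, observed in 2014, shifted to the 1991 frame
# using the SEAT-style velocities (6.4 mm/yr east, 4.6 mm/yr north):
e, n = to_legacy_epoch(550123.456, 5272345.678, 2014.0, 1991.0,
                       v_east_m_yr=0.0064, v_north_m_yr=0.0046)
```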
The Scaled, the Extrapolated, and the Inherited
How many themes in an enterprise GIS originated in field measurements, and how many were “derived” and applied by any number of other methods? (I’m going to avoid the term “rubber-sheeting”.) Some utility themes might include valves and maintenance holes scaled in from legacy paper (or even cloth) maps, say at a scale of 1”=40’ (or another scale). At that scale the line on the map might represent over a foot of width on the ground – and how did the line get on there in the first place? Did the legacy map serve more as a system schematic than as an accurate engineering plan? Even as a schematic it would have been very valuable and useful on many levels, but since spatial discrepancy was cited as the reason to reject survey data, let’s continue to explore the spatial aspect. Another example is extrapolation to place a feature in a theme. For instance, water meters might have been mapped via dynamic segmentation – bisecting the parcel frontage along a right of way and placing the symbol at some standard offset. The actual location may be at one end of the parcel frontage – 1’, 5’, 10’, 20’ or more away. Such methods were invaluable at the time – it was better to have an affordable and expedient “schematic” theme and to be able to do so much with it. Folks get this – accuracy at a high price was not a high priority, and not necessary for many uses. But the harm comes in forgetting the humble spatial origins of the features, and in how they have come to be viewed as “righter” than they actually are – and sometimes misused accordingly.
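The extrapolation rule described above is easy to picture in code. This is a hedged sketch of the general idea – midpoint of the frontage plus a standard perpendicular offset – not any particular GIS package’s dynamic segmentation API; the names and numbers are illustrative.

```python
# Illustrative placement rule: drop a water-meter symbol at the midpoint
# of a parcel's frontage line, pushed a standard offset perpendicular
# toward the right of way. The real meter may sit far from this point.
import math

def place_by_rule(frontage_start, frontage_end, offset_m):
    """Midpoint of the frontage segment plus a perpendicular offset."""
    (x1, y1), (x2, y2) = frontage_start, frontage_end
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)
    # unit normal, to the left of the direction of travel
    nx, ny = -dy / length, dx / length
    return (mx + nx * offset_m, my + ny * offset_m)

# A 30 m frontage running due east; symbol placed 3 m to the north side:
x, y = place_by_rule((100.0, 200.0), (130.0, 200.0), 3.0)
# → (115.0, 203.0) by this rule, even if the actual meter is near a corner
```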
This is not to get on a high horse about location and measurement – if there were no desire to improve the enterprise GIS spatially, then none of this would matter. But this is not just about location: data validation is another valuable byproduct of field surveying or field asset inventory programs. An ambitious and successful asset inventory of drainage features for the same enterprise found that many features were not only in the wrong place; many were simply missing from the enterprise GIS. The asset inventory crews are using equipment capable of precisions of around 10cm, and even with the GPS antenna on a backpack (and not necessarily plumbed up over a feature), the realistic expectation of resultant accuracies of around one foot in horizontal is still a marked improvement – much better than most legacy locations in the drainage feature themes developed decades ago via scaling and extrapolation.
Consider the Field Survey
Typically done for design engineering or real property purposes, a field survey is performed to meet specified high precision criteria – think centimeters. Sure, one could quibble about errors in some surveys, and surveyors will be the first to admit these can happen, but surveys are typically extremely consistent as a deliverable. A surveyor does not have the luxury of slapping a disclaimer on their map; they often need to stamp it, be licensed to do so, and may be called to defend their measurements in a court of law. One can put a lot of faith in the accuracy of the survey, and, as it resulted from an actual visit to the field, have confidence that the feature on the map actually existed in that location on the date of the survey. Engineering surveys to support, say, the design of new waterlines typically collect all surface features within the subject right-of-way – and often include resolution of the right-of-way lines themselves. Yes, some features will be removed or altered by the construction, but there is a wealth of potentially validated and accurate features that will not be – can we afford to waste good data? Surely the fine minds who develop and maintain enterprise geospatial data can find a way to “mine” these resources.
One argument for excluding survey data from the GIS is that it is somehow “disruptively accurate” – that updating positions will require the warping of connected features/systems. Then, it is feared, you get a hodgepodge of accuracies (is this any different than what exists now?). And more important: weren’t we all supposed to be stating the accuracy, date of origin, and method of acquisition in the metadata for each theme and/or individual feature?
If there is not enough time or budget to update features in the GIS from the digital survey maps/drawings, then a simple solution can make that data available – a “shadow” theme. The surveyed data can be added as a separate theme, and if an end user is interested they can always click the “survey says!” theme. They might see that there is a previously missed feature that a recent engineering survey found (far more common than you’d think), or that a feature was put into the GIS in the wrong location. What action this prompts might range from selectively updating the errant feature in the respective production theme to deciding that it does not affect the particular end use at that time. It is good to have that option. I have not heard any compelling reason why an end user should have their hands tied when there is good data available that has already been paid for, and costs little or nothing to view or add.
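Comparing a shadow theme against the production theme can itself be automated. Here is a minimal sketch of that kind of “survey says!” check: flag any feature whose surveyed position disagrees with the production position by more than a tolerance, and any field-found feature missing from the GIS entirely. The asset-ID keys, coordinate tuples, and tolerance are assumed for illustration, not a specific GIS schema.

```python
# Flag discrepancies between a production theme and a survey shadow theme,
# both modeled here as {asset_id: (x, y)} dictionaries in the same
# coordinate system. A None distance marks a feature missing from the GIS.
import math

def discrepancy_report(gis_features, survey_features, tol_m=0.15):
    """Return (asset_id, distance_m) pairs exceeding the tolerance."""
    flagged = []
    for asset_id, (sx, sy) in survey_features.items():
        if asset_id not in gis_features:
            flagged.append((asset_id, None))  # found in field, not in GIS
            continue
        gx, gy = gis_features[asset_id]
        d = math.hypot(sx - gx, sy - gy)
        if d > tol_m:
            flagged.append((asset_id, round(d, 3)))
    return flagged

gis = {"MH-101": (100.0, 200.0), "MH-102": (150.0, 210.0)}
survey = {"MH-101": (100.05, 200.02),   # within tolerance
          "MH-102": (150.40, 210.10),   # ~0.41 m off
          "MH-103": (175.30, 214.80)}   # missing from the GIS
print(discrepancy_report(gis, survey))
# → [('MH-102', 0.412), ('MH-103', None)]
```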
The Two Ellipsoids and Other Sources of Discrepancy
A related note: a common error in method added several feet of error to some legacy datasets, and with improvements in field location technologies some folks are only now discovering it – the two ellipsoids. I have often heard of folks trying to resolve a shift of over a meter for some features, even after the epoch and the precision of the data collection method are accounted for. Using GPS for field asset inventory has been popular and affordable for decades. The ubiquitous hand-held or backpack resource-grade single-frequency GPS unit has been a godsend for asset inventory, but sometimes the default geodetic system was set to WGS84 rather than the current reference system of the GIS – in North America, typically NAD83. The two systems use nearly identical ellipsoids, but their origins differ by about 2 meters, so at any given place in the US the difference on the earth’s surface can be several feet. The transformation is done on the fly for you in the software, but only if you choose the right system. For example, in Seattle the shift is a little less than 4 feet in horizontal and about a foot in vertical. This is becoming less of a hazard with modern gear, but I still run into it – not only in legacy features in a GIS where a crew might have had the wrong setting, but even today when the fundamentals are not understood or considered: datum, units, epoch, method, training… misconceptions…
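To see why a ~2 m offset between datum origins surfaces as “several feet” on the ground, one can rotate an Earth-centered offset vector into local east/north/up coordinates. The sketch below uses only a rough translation component; the official NAD83/ITRF-style transformations also carry rotation and scale terms, so this will not reproduce the Seattle figures exactly – it only illustrates the order of magnitude. The offset values and the Seattle latitude/longitude are approximate, for illustration only.

```python
# Order-of-magnitude sketch: project a geocenter offset (ECEF) into local
# east/north/up at a given latitude/longitude. Translation-only -- real
# datum transformations also include rotation and scale terms.
import math

def ecef_offset_to_enu(dx, dy, dz, lat_deg, lon_deg):
    """Rotate an ECEF offset vector into local east/north/up components."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy
         + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy
         + math.sin(lat) * dz)
    return e, n, u

# Rough ~2 m geocenter offset (illustrative values, meters), evaluated
# near Seattle (about 47.6 N, 122.3 W):
e, n, u = ecef_offset_to_enu(1.0, -1.9, -0.5, 47.6, -122.3)
horizontal_ft = math.hypot(e, n) / 0.3048  # several feet, as claimed above
```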
The Glass Floor
Accurate elevation data is another sore spot. While airborne lidar has raised the quality of elevation data in GIS across the board, specific elevations and inverts – for example, on drainage features used for hydrologic modeling – require much higher precision. If field observations were used to get what the lidar could not, and this was done with, say, GPS, you have to remember that the vertical precision of resource-grade, single-frequency GPS is 2–3 times worse than its horizontal precision. Surveyors know the limitations of their own high precision GPS (RTK and networked RTK) and know that for certain precisions they will need to use a digital level.
The glass floor for resource-grade field observations is that only a certain threshold of precision can be achieved – not only in elevation, but in horizontal as well. Popular mapping units are limited (sometimes by design) to 10cm precision. Much in the same way as folks got used to the idea that affordable mapping gear back in the day was only capable of +/- 1 meter, they sometimes start to believe that enterprise data can only ever be as accurate as 1m or 10cm – that it is not possible to be any more accurate, or that there could never be a benefit to being more accurate. The legacy of the glass floor may well hurt efforts to move towards 3D GIS.
There is a missed opportunity any time someone goes out in the field to do asset inventory with resource-grade equipment. The largest cost of field observations (over time) is labor, and with a minor investment in more capable equipment, the same labor cost expended per feature collected could yield higher precision, especially in the vertical. Better yet, have folks who are trained and experienced in working within higher precisions supervise or QC such work (as required by law in a few states). Sure, it is argued that most users do not need such accuracy, but as we move towards 3D GIS and GeoDesign, there are those who will.
Misplaced Fears Hinder Progress
People are trusted to use enterprise data presently, even enterprise data that is far less homogeneous, accurate, and complete than we realize (or would like to admit). It seems a little hard to believe that enterprises are investing in accurate field surveys, but willingly let that valuable resource go to waste after a single use.
GIS has truly been a force in changing our world for the better. Enterprises can manage assets and plan for the future with much more confidence than ever before. But GIS can continue to grow and become dynamic. What was revolutionary in, say, 1991 (around the time when a lot of enterprise GIS’s were just getting started) can now become bogged down by its own legacy. The next GIS revolution could very well be adding that 4th “D” and making the GIS dynamic not only spatially, but also in the ability (or willingness) to add valuable and rich resources from beyond the legacy “fences”.
Tags: Gavin Schrock, GIS, Survey