Generally, state agencies, some regional agencies, and, for a fee, a plethora of data vendors.
The first step is aggregation: taking many locally developed parcel data sets and standardizing each to a common format, and in some rare cases performing quality control and/or spatially reconciling the data at jurisdiction boundaries.
A national standard for parcel attributes exists (http://nationalcad.org/CadStandards/CadStand.html), but as with locally produced data, each state has its own standards to meet its business needs. A recent review of state parcel standards found that more than 20 states had developed state parcel publication standards with many common attributes but no commonality of field names, types, or lengths. In all states reviewed, the state-aggregated data did include data definitions and was easier to understand and interpret than the individual producer data sets.
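To make that field-name reconciliation concrete, here is a minimal Python sketch of the kind of schema standardization an aggregator performs; the producer names and field names below are hypothetical illustrations, not drawn from any actual state standard.

```python
# Minimal sketch of schema standardization during aggregation.
# The source field names below are hypothetical examples of how
# different producers might label the same attribute.

# Map each producer's field names onto a common publication schema.
FIELD_MAP = {
    "county_a": {"PIN": "parcel_id", "OWNER_NM": "owner_name", "ACRES": "area_acres"},
    "county_b": {"ParcelNum": "parcel_id", "Owner": "owner_name", "GIS_Acres": "area_acres"},
}

COMMON_FIELDS = ["parcel_id", "owner_name", "area_acres"]

def standardize(record: dict, source: str) -> dict:
    """Rename a producer's fields to the common schema; missing fields become None."""
    mapping = FIELD_MAP[source]
    renamed = {mapping[k]: v for k, v in record.items() if k in mapping}
    return {field: renamed.get(field) for field in COMMON_FIELDS}

# The same parcel attributes arriving under two different local schemas:
print(standardize({"PIN": "12-345", "OWNER_NM": "SMITH J", "ACRES": 1.2}, "county_a"))
print(standardize({"ParcelNum": "98-765", "Owner": "DOE A"}, "county_b"))
```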
The tools used to build aggregated data sets range from brute force to Safe Software’s FME, Esri’s Community Parcel tools, and customized state-specific tools. Most states are either using or moving to web-based processing for aggregation.
Update frequency is typically annual; some states update twice a year, and a few update daily or continuously.
Some of the nuances and challenges of data aggregation are described in this article (http://www.esri.com/esri-news/arcnews/winter16articles/making-local-parcel-data-open-at-state-national-levels).
Distribution has typically been zip file download and a web-based viewer. Files may be offered as a single statewide file, but often they are individual files for each data provider, such as each town or each county. A noticeable trend for aggregated and distributed parcel data is the use of feature services. Many federal agency systems holding national parcel data still require a data download to incorporate the information, but even federal agency applications are increasingly using feature services.
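To illustrate why feature services appeal to developers, here is a short Python sketch that queries the standard ArcGIS REST API query endpoint that feature services expose; the service URL is a hypothetical placeholder, and the parameters shown are the commonly used ones.

```python
import requests

# Hypothetical feature service endpoint; substitute a real aggregator's URL.
SERVICE_URL = "https://services.example.com/arcgis/rest/services/Parcels/FeatureServer/0/query"

# Standard ArcGIS REST API query parameters: filter, fields, and output format.
params = {
    "where": "1=1",           # no attribute filter; return everything allowed
    "outFields": "*",         # all attribute fields
    "resultRecordCount": 10,  # keep the response small for this example
    "f": "geojson",           # GeoJSON output, easy for developers to embed
}

response = requests.get(SERVICE_URL, params=params, timeout=30)
response.raise_for_status()
features = response.json().get("features", [])
print(f"Retrieved {len(features)} parcel features")
```

Unlike a zip download, a request like this returns current data on demand, which is exactly the consumption pattern application developers expect.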
Paul Ramsey presented an intriguing twist on data aggregation and distribution in 2015 (http://s3.cleverelephant.ca.s3.amazonaws.com/2015-ccog.pdf). Mr. Ramsey discusses relevancy in terms of frequency of use: data that are not used are less relevant than data that are used. Let’s take as given that parcel data are important and have many uses, and that parcel data must therefore be relevant. To be relevant, the data must be available for use and recognized as a usable source.
The Moment of Opportunity, as described by Mr. Ramsey, is that small window when data (here, parcel data) can be provided in a way that developers can easily harvest and embed in applications seen and used by many on mobile devices. An interesting implication is that data need to be distributed so that developers can access and use them, rather than focusing on end-user consumption.
This is an interesting
perspective and important to consider.
As Mr. Ramsey states, “it just means that governments need to accept the way that
the technology ecosystem is going to want to consume their data, and change
their behavior to fit. The first step is to recommit to the idea of data as a
public good. If this data (parcel data) is critical infrastructure, as we
believe it to be, making it available to all members of civil society, without
restriction, is a basic requirement. …
Commit to simplicity in distribution. Follow the lead of NASA and publish raw
data, with computer readable manifests, with stable URLs, close to the point of
consumption on public cloud infrastructure.”
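To make “computer readable manifests, with stable URLs” concrete, here is a minimal Python sketch that generates such a manifest for a set of published raw files; the base URL, file layout, and manifest fields are hypothetical illustrations of the idea, not an established format.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical stable base URL on public cloud storage.
BASE_URL = "https://data.example.gov/parcels/"

def build_manifest(data_dir: str) -> dict:
    """Describe each published raw data file with a stable URL and a checksum."""
    entries = []
    for path in sorted(Path(data_dir).glob("*.zip")):
        entries.append({
            "name": path.name,
            "url": BASE_URL + path.name,          # stable, predictable URL
            "bytes": path.stat().st_size,
            "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
        })
    return {"dataset": "statewide-parcels", "files": entries}

if __name__ == "__main__":
    manifest = build_manifest("published")
    Path("published/manifest.json").write_text(json.dumps(manifest, indent=2))
```

A developer can then fetch one small manifest file, discover every published file programmatically, and verify downloads against the checksums.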
Who distributes parcel data? Generally, data aggregators, but we should all keep an eye on distributing our data in ways that will keep it relevant.
This is an interesting internet article on realtors’ use of parcel data holdings. The Multiple Listing Service (MLS) and Zillow are two of the parcel data holdings discussed: http://www.huffingtonpost.com/entry/why-you-should-avoid-zillow-at-all-costs_us_57acba77e4b0e7935e046302