The first thing that comes to mind to test the device’s precision (I don’t really know how to check its accuracy) is to plug it in, leave it stationary, and collect readings for a good while.
I got two GPS receivers of roughly the same capabilities:
- The BU-353
- The Columbus V800
I first got the BU-353, but it seemed finicky: it would not give a position reading unless it had more than 10 satellites in view, and once it did give a position, the readings seemed jittery. The Columbus gives a reading even if there are only a few satellites in view, but “warns” us to treat it with low confidence by tagging the reading as V (void) in the $GPRMC message.
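To make the V-flag concrete, here is a minimal sketch of parsing a $GPRMC sentence (checksum check included). It uses the standard NMEA example sentence, not data from either receiver:

```python
def nmea_checksum_ok(sentence):
    """Verify the *hh checksum of an NMEA sentence like '$GPRMC,...*6A'."""
    body, _, given = sentence.strip().lstrip('$').partition('*')
    calc = 0
    for ch in body:
        calc ^= ord(ch)                 # checksum is the XOR of all bytes between $ and *
    return given != '' and calc == int(given, 16)

def parse_gprmc(sentence):
    """Return (valid, lat, lon) from a $GPRMC sentence, lat/lon in decimal degrees.

    valid is False when the status field is 'V' (void) -- the "low confidence"
    tag mentioned above.
    """
    fields = sentence.split('*')[0].split(',')
    status = fields[2]                  # 'A' = active fix, 'V' = void

    def dm_to_deg(dm, hemi):
        if not dm:
            return None
        deg = int(float(dm) // 100)     # NMEA packs degrees+minutes as (d)ddmm.mmmm
        minutes = float(dm) - 100 * deg
        val = deg + minutes / 60.0
        return -val if hemi in ('S', 'W') else val

    lat = dm_to_deg(fields[3], fields[4])
    lon = dm_to_deg(fields[5], fields[6])
    return status == 'A', lat, lon

# The classic NMEA 0183 example sentence:
line = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(nmea_checksum_ok(line), parse_gprmc(line))
```

In practice the same loop would run over every line read from the serial port, discarding fixes where the status is V.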
Grabbing data from the BU-353 over 72h yields:
We see both noise and drift. The data hasn’t been filtered or edited.
Grabbing data from the Columbus V800:
Here again, we see comparable artifacts, with the Columbus V800 somewhat less dispersed. Overlapping the two graphs, we see that the Columbus V800 is less noisy:
Looking at how the reading varies over time is also interesting (the GPSes were lying against the case of the desktop computer, so they were as stationary as can be). Computing the successive differences and plotting them:
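One way to put a number on the dispersion is the per-axis standard deviation of the fixes, converted to metres. This is a sketch with made-up numbers standing in for the logged fixes; note that a degree of longitude shrinks with the cosine of the latitude, so the conversion factors differ per axis:

```python
import math
import random

# Hypothetical stationary fixes in decimal degrees; real data would come from
# the 72 h log. The longitude scatter is deliberately made larger here.
random.seed(42)
lat0, lon0 = 48.1173, 11.5167
fixes = [(lat0 + random.gauss(0, 2e-5),
          lon0 + random.gauss(0, 5e-5))
         for _ in range(1000)]

def spread_meters(fixes):
    """Per-axis sample standard deviation of (lat, lon) fixes, in metres."""
    lats = [p[0] for p in fixes]
    lons = [p[1] for p in fixes]
    mlat = sum(lats) / len(lats)
    mlon = sum(lons) / len(lons)
    sd = lambda xs, m: math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    deg_lat_m = 111_320                                    # metres per degree of latitude
    deg_lon_m = 111_320 * math.cos(math.radians(mlat))     # shrinks with latitude
    return sd(lats, mlat) * deg_lat_m, sd(lons, mlon) * deg_lon_m

sd_north, sd_east = spread_meters(fixes)
print(f"sigma_N = {sd_north:.2f} m, sigma_E = {sd_east:.2f} m")
```

Run over the real logs, the two numbers would quantify exactly the anisotropy the plots show.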
We see that the variation is not isotropic: it varies more in longitude than in latitude. This jitter could surely be exploited to enhance the reading, though I am not sure how just yet. Of course, the best would be to have the raw timing data from the GPS satellites and solve an overdetermined system of equations to find the most likely position and estimate the error. Unfortunately I do not have access to that data from these receivers: the $GPGSV message is just enough to display the satellites on a screen, but far too coarse to squeeze out extra precision.
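For what it’s worth, the shape of that overdetermined solve can be sketched in a toy 2-D setting with made-up satellite positions and exact ranges. A real GNSS fix works the same way but in four unknowns (x, y, z and the receiver clock bias) from pseudoranges:

```python
import math

# Four "satellites" at known positions, each giving a range to the receiver.
# With more measurements than unknowns, least squares finds the best fit.
sats = [(0.0, 20.0), (15.0, 18.0), (-12.0, 17.0), (5.0, 25.0)]
truth = (1.0, 2.0)
ranges = [math.hypot(sx - truth[0], sy - truth[1]) for sx, sy in sats]

def solve_fix(sats, ranges, guess=(0.0, 0.0), iters=10):
    """Gauss-Newton on the range residuals; returns the least-squares position."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the normal equations (J^T J) d = J^T r of the linearized problem.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (sx, sy), r in zip(sats, ranges):
            d = math.hypot(sx - x, sy - y)          # predicted range from current estimate
            ux, uy = (x - sx) / d, (y - sy) / d     # unit vector = Jacobian row
            res = r - d                             # measured minus predicted range
            a11 += ux * ux; a12 += ux * uy; a22 += uy * uy
            b1 += ux * res; b2 += uy * res
        det = a11 * a22 - a12 * a12
        dx = (a22 * b1 - a12 * b2) / det            # solve the 2x2 system by hand
        dy = (a11 * b2 - a12 * b1) / det
        x, y = x + dx, y + dy
    return x, y

print(solve_fix(sats, ranges))
```

With noisy ranges, the same residuals also yield the error estimate (the covariance of the solution), which is precisely what the coarse $GPGSV data cannot provide.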