  • teruakohatu 2 hours ago
    The author writes a significant amount about their workstation and about how cleaning up the geometry can be helpful when accessing it remotely over S3.
    The dataset is an 8,000-row spreadsheet.
    My advice when working with such a small dataset is not to overthink it.
    • jandrewrogers 44 minutes ago
      Real-world geometry is a mess and an endless headache. You can write brute-force tooling to grind through it and auto-repair the issues it can classify across the myriad formats, but it isn’t computationally cheap and you’ll still find “wtf” cases that you have to investigate manually. I’ve worked with official government GIS datasets where 1-5% of all geometry was defective in some way. You have to check it; the percentage of datasets with no defects is much smaller than you’d hope.
      8,000 rows is small, but the typical processing isn’t fast, so optimizing it has limited ROI. I use a custom Python library I wrote for this kind of work, which makes it a bit slow, but you constantly run across new types of inexplicable geometry issues, so the ability to rapidly write custom routines is paramount, and that is what Python excels at.
      GIS data is computationally expensive to process even beyond its obvious properties.
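      A minimal sketch of that kind of validity sweep, assuming GeoPandas plus Shapely 2.x and a placeholder file name (this is not the commenter's custom library):

        # Scan a vector layer for defective geometry and attempt auto-repair.
        import geopandas as gpd
        from shapely.validation import explain_validity, make_valid

        gdf = gpd.read_file("parcels.gpkg")        # any OGR-readable source

        bad = gdf[~gdf.geometry.is_valid]
        print(f"{len(bad)} of {len(gdf)} geometries are invalid")
        for idx, geom in bad.geometry.items():
            print(idx, explain_validity(geom))     # e.g. "Self-intersection at ..."

        # Repair what can be repaired; anything still invalid needs manual review.
        gdf["geometry"] = gdf.geometry.apply(lambda g: g if g.is_valid else make_valid(g))
        print(f"{(~gdf.geometry.is_valid).sum()} geometries remain invalid after make_valid")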
    • marklit 1 hour ago
      There are ~15 GB of SAR imagery at the bottom being rendered as-is from GeoTIFF files. On my 2020 MBP, rendering that amount of data in QGIS would lag without building mosaics and tiles.
      The Parquet pattern I'm promoting makes working across a wide variety of datasets much easier. Not every dataset is huge, but being in Parquet makes it much easier to analyse with a wide range of tooling.
      In the web world, you might only have a handful of datasets that your systems produce, so you can pick the formats and schemas ahead of time. In the GIS world, you are forever sourcing new datasets from strangers. There are 80+ vector GIS formats supported in GDAL. Getting more people to publish to Parquet first removes a lot of ETL work for everyone else down the line.
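      A minimal sketch of that Parquet-first step, assuming GeoPandas with pyarrow installed and placeholder file names:

        # Convert any OGR-supported vector file to GeoParquet so downstream tools
        # (DuckDB, pandas, QGIS, etc.) can read it without format-specific ETL.
        import geopandas as gpd

        gdf = gpd.read_file("source_layer.shp")    # any of GDAL's 80+ vector formats
        gdf = gdf.to_crs(epsg=4326)                # normalise the CRS before publishing
        gdf.to_parquet("source_layer.parquet")     # writes GeoParquet with geo metadata

      If GDAL was built with its Arrow/Parquet driver, ogr2ogr -f Parquet source_layer.parquet source_layer.shp should do roughly the same conversion from the command line.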
    • 3eb7988a1663 2 hours ago
      He posts that same preface on all of his blog posts. Seemingly more pertinent when it is one of the 1TB datasets he pulls in from time to time.
  • wodenokoto 2 hours ago
    I think the most interesting part was the last map, but the practicalities of constructing it were completely glossed over.
    It was nice seeing how these stats can be calculated in SQL, but this analysis would be beaten by a few pivot tables in Excel.
    Excel can even draw a map to go along with it (although not as pretty).
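    For comparison, a sketch of the pivot-table version of that aggregation in pandas, with hypothetical column names rather than ones taken from the article:

      # Count features per region and status -- the pivot-table equivalent of
      # the article's SQL GROUP BY aggregates.
      import pandas as pd

      df = pd.read_parquet("stations.parquet")     # placeholder file name
      summary = df.pivot_table(index="region", columns="status",
                               values="station_id", aggfunc="count", fill_value=0)
      print(summary)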
  • kreelman 4 hours ago
    Neat. Thanks!