The OSM Carto standard style and its German counterpart have been upgraded to v4.15.0.
The first public version of the HTTP API is now live, allowing you to submit render requests from within your own applications.
Also new: shorter host names now exist to access the service. Instead of https://maposmatic.osm-baustelle.de/ you can now also use https://print.get-map.org/ and https://api.get-map.org/; all of these are aliases for the same MapOSMatic instance.
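If you want to experiment with the new API, here is a minimal sketch of what submitting a render request could look like in Python. Note that the endpoint path and the JSON fields shown here are only assumptions for illustration; please check the API documentation on https://api.get-map.org/ for the actual request format.

```python
# Minimal sketch of submitting a render request to the new HTTP API.
# NOTE: the endpoint path ("/apis/v1/jobs") and the JSON fields below are
# assumptions for illustration only -- consult the API documentation on
# https://api.get-map.org/ for the actual request format.
import requests

API_BASE = "https://api.get-map.org"  # one of the new host name aliases

job = {
    "title": "My town",   # hypothetical field: title printed on the map
    "osmid": -62422,      # hypothetical field: OSM relation to render
    "paper_size": "A4",   # hypothetical field: output paper size
}

response = requests.post(API_BASE + "/apis/v1/jobs", json=job, timeout=30)
response.raise_for_status()
print(response.json())  # would return the queued job’s id and status
```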
I pushed some changes live today. There were a few glitches during the transition, but everything should be back in working order now.
OpenStreetMap Carto, Waymarked, and OpenTopoMap styles have been updated. Belgian and Swiss styles have been added.
I have updated the frontend code a bit, finally upgrading to Bootstrap v3.x and improving navigation between the parts of the “Create Map” form.
Something has gone wrong with the regular updates of the routes database, so I have to re-import that data from scratch.
While this re-import is ongoing, the WayMarkedTrails overlays will be disabled.
None of the other map styles or overlays are affected, though, so most of the service will remain available.
Today the MapOSMatic service on this server served its twenty-thousandth map request since it started a bit more than two years ago, on May 3rd, 2016.
At the current rate of incoming requests the next 20,000 will take less than a year, or maybe even only half a year if the average number of requests continues to grow 🙂
I updated the CartoOSM and CartoOsmBW styles to version 4.11.0.
See the original release announcement below:
Last weekend’s Ubuntu upgrade to 18.04 LTS went mostly well; the Mapnik Python binding problem from 17.10 has been fully fixed, among other things.
What didn’t go so well, though, was the database upgrade from PostgreSQL 9.6 / PostGIS 2.3 to 10 / 2.4.
First of all, the process for upgrading PostgreSQL and PostGIS to new versions at the same time is not as trivial as it could be, and requires some file name faking hacks before running the actual pg_upgrade:
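The gist of the workaround, sketched here as a small Python script: pg_upgrade refuses to run if the loadable libraries referenced by the old cluster (such as $libdir/postgis-2.3) do not exist under the new server, so the old PostGIS library names have to be faked as symlinks to the new ones first. All paths and version numbers below assume a typical Debian/Ubuntu layout and are illustrative only, not the exact commands run on this server.

```python
# Sketch of the "file name faking" hack for upgrading PostgreSQL and PostGIS
# in one go. NOTE: all paths assume a typical Debian/Ubuntu layout and are
# illustrative assumptions, not the exact commands used on this server.
import os
import subprocess

NEW_LIB = "/usr/lib/postgresql/10/lib"

# Fake the PostGIS 2.3 library names inside the new server's lib directory
# by pointing them at the PostGIS 2.4 libraries.
for lib in ("postgis", "rtpostgis"):
    os.symlink(os.path.join(NEW_LIB, lib + "-2.4.so"),
               os.path.join(NEW_LIB, lib + "-2.3.so"))

# With the fake names in place, pg_upgrade can run normally.
subprocess.run([
    "/usr/lib/postgresql/10/bin/pg_upgrade",
    "-b", "/usr/lib/postgresql/9.6/bin",   # old binaries
    "-B", "/usr/lib/postgresql/10/bin",    # new binaries
    "-d", "/var/lib/postgresql/9.6/main",  # old data directory
    "-D", "/var/lib/postgresql/10/main",   # new data directory
], check=True)

# Afterwards the PostGIS extension still has to be upgraded in each
# database, e.g. with:  ALTER EXTENSION postgis UPDATE;
```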
Once I found that information, the actual pg_upgrade ran fine, and so did the extension upgrade for PostGIS.
Map rendering also looked fine, but when I tried to reactivate the import of minutely diffs, jobs that should only take seconds, or minutes at worst, just stalled, and eventually made one PostgreSQL process run out of memory after allocating over 60GB of RAM in some 20 minutes.
In the end we came to the conclusion that something must already have gone wrong during the previous full planet import, somehow not causing problems on the older versions but triggering something strange on the more recent ones.
I did not put much more effort into debugging this at that point, and to prevent bad data from possibly spreading I decided to cut my losses and start a new full import right away, without announcing the downtime a week ahead of time.
Fortunately that import went well, and so does the import of minutely diffs now. The system has since caught up and is less than 15 minutes behind on average.
I also have a gut feeling that rendering jobs are processed a little faster now, but unfortunately I haven’t properly preserved timing results from earlier test runs to verify this yet.