Bypass the 100 objects limit?


Hi all,

First, I'd like to thank you very much for your hard work!

I use curl to fetch https://osm-boundaries.com/Download/Submit.... and it works great!


Is there any way to bypass the limit of 100 osmIds?


Magnus


Doesn't seem like I got an email about the other topic. I'll answer the union-related questions in that one.

fabrice régnier

Hello @Magnus,

Thank you for replying.

>So my question is, why should the limit be higher?

The answer refers directly to https://osmboundaries.userecho.com/en/communities/1/topics/19-download-http-options-is-it-possible-to-do-an-union-of-all-osmids
On the wambacher site, I've often used the union=true parameter so that I can get only one polygon instead of 100.

Actually, if union is not possible, I'd say there's no need for a higher limit. But if union were possible via the curl access, then it would be nice to bypass 100 -> ... hmmm, 150? :-p
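
In the meantime, one way around it would be to merge the downloaded boundaries locally. A rough sketch, assuming GDAL's ogr2ogr built with SpatiaLite support and a downloaded boundaries.geojson; the file and layer names are just placeholders, not part of the osm-boundaries API:

    # Dissolve all features of boundaries.geojson into a single geometry.
    # Older GDAL versions name the GeoJSON layer "OGRGeoJSON" instead of "boundaries".
    ogr2ogr -f GeoJSON merged.geojson boundaries.geojson \
        -dialect sqlite -sql "SELECT ST_Union(geometry) AS geometry FROM boundaries"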

regards,


fabrice.

Magnus
  • Under review

Currently there isn't. But there isn't any good reason for the exact number of the limit either. When implementing the site we just needed a limit, and we set a fairly low one that we knew wouldn't cause issues. Since then we haven't been hitting the limit ourselves, and we haven't heard any "complaints" either, so we had forgotten about it.

We have another site ourselves that fetches data from osm-boundaries.com, but that site always splits its requests into batches anyway, to reduce the importer's memory usage.

A lower limit also helps us with cache hits, although it's very unlikely to get a cache hit from someone else's request; that was one of the thoughts behind it originally.


There is, however, one issue with increasing it. The API currently uses GET only. The de facto maximum length for a GET URL is 2048 bytes. The IDs, including the comma separator, are up to 10 bytes each, so one hundred IDs already take ~1000 bytes, plus the other parameters, the domain and so forth. It's likely that a generated URL with 200 IDs wouldn't work because of this.
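
As a rough back-of-the-envelope check (fake 10-byte IDs and an illustrative osmIds query string, not the exact format the site generates):

    # Build a URL with 200 fake IDs of 9 characters each, separated by commas.
    ids=$(seq -f '-%08g' 1 200 | paste -sd, -)
    url="https://osm-boundaries.com/Download/Submit?osmIds=${ids}"
    echo "${#url}"    # 2049 -- already past 2048 before any other parameters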

With this in mind, there isn't a reasonable way to increase the limit much as long as we generate GET URLs, which we prefer since they're more user-friendly. With curl we could do POST, but then there wouldn't be a link to copy and paste into a browser, for example. With the above math we can probably increase the limit safely to around 180; the question is whether there is a good reason to. So my question is, why should the limit be higher? Why not make multiple requests?
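
Splitting the IDs into batches of 100 on the client side is only a few lines of shell. A rough sketch; the query string and output naming are placeholders, so carry over whatever other parameters your working curl command already uses:

    # ids.txt holds one OSM ID per line; split it into chunks of at most 100.
    split -l 100 ids.txt batch_
    for f in batch_*; do
        ids=$(paste -sd, - < "$f")    # join one chunk with commas
        curl -sS "https://osm-boundaries.com/Download/Submit?osmIds=${ids}" -o "${f}.out"
    done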