The project I am working on has changed its requirements: they now want to make WMTS requests instead of WMS requests. With WMS everything works fine, including the TIME parameter in the GetMap request (store: netCDF file).
I cannot replicate the same requests with WMTS because apparently the TIME parameter is not enabled. I installed the WMTS multidimensional plugin and I can see the times in the DescribeDomains response, but TIME is still not working in the GetTile request (an example of the kind of request I'm sending is below). Can anybody help here?
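To illustrate, this is roughly the shape of the GetTile request I'm sending (a sketch only: the workspace, layer, tile indices and time value are placeholders, not my real ones; the endpoint is the standard GeoServer gwc/service/wmts one):
http://myserver:8080/geoserver/gwc/service/wmts?SERVICE=WMTS&VERSION=1.0.0&REQUEST=GetTile&LAYER=myws:mylayer&STYLE=&TILEMATRIXSET=EPSG:4326&TILEMATRIX=EPSG:4326:2&TILEROW=1&TILECOL=2&FORMAT=image/png&TIME=2016-02-23T00:00:00.000Z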
My current setup for the project I'm working on is:
Next.js
Wordpress backend with GraphQL plugin enabled
They live on two different servers
I would like to be able to make a request from a Next.js page that is proxied via an API route to the WordPress backend. I want the GraphQL query to be passed along, and I would like to be able to modify the request (for example add a header, set a cookie, etc.) before it reaches the WordPress backend.
I first tried to achieve this using this module: https://github.com/http-party/node-http-proxy, calling its .web() method. It almost worked, except that I got back a response from WordPress that I wasn't able to decode (I tried with Buffer etc., but had no success).
So my current approach is to make an axios request from my API route, passing along the req.body, and that setup works; it looks roughly like the sketch below.
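A boiled-down version of the axios variant (the backend URL, header and cookie values here are placeholders, not the real project values):
// pages/api/graphql.js - minimal sketch of the current axios-based proxy
import axios from 'axios';

export default async function handler(req, res) {
  try {
    const response = await axios.post(
      'https://wordpress.example.com/graphql', // placeholder WordPress endpoint
      req.body, // pass the GraphQL query straight through
      {
        headers: {
          'Content-Type': 'application/json',
          'X-Custom-Header': 'example-value', // example of a header added before it reaches WordPress
          Cookie: 'example_cookie=1', // example of setting a cookie on the upstream request
        },
      }
    );
    res.status(response.status).json(response.data);
  } catch (err) {
    res.status(err.response ? err.response.status : 500).json({ error: 'Proxy request failed' });
  }
}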
However, is this way of proxying OK, or should I try to make it work with node-http-proxy? I don't know what the possible benefits of one over the other would be.
Thank you
If you use axios you make an extra request when you retrieve data from the source, which decreases performance. On the other hand, if you use a proxy you forward the incoming request directly, which gives you better performance.
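If you do want to revisit node-http-proxy, a rough sketch of a forwarding API route (the target URL and header values are placeholders; adjust to your setup) could look like this:
// Sketch of a Next.js API route that forwards the raw request via node-http-proxy
import httpProxy from 'http-proxy';

const proxy = httpProxy.createProxyServer({
  target: 'https://wordpress.example.com', // placeholder WordPress backend
  changeOrigin: true,
});

// Add a header / set a cookie on the outgoing request before it reaches WordPress
proxy.on('proxyReq', (proxyReq, req, res) => {
  proxyReq.setHeader('X-Custom-Header', 'example-value');
  proxyReq.setHeader('Cookie', 'example_cookie=1');
});

// Disable Next.js's built-in body parser so the request body can be streamed through untouched
export const config = { api: { bodyParser: false } };

export default function handler(req, res) {
  proxy.web(req, res, {}, (err) => {
    res.statusCode = 502;
    res.end('Proxy error: ' + err.message);
  });
}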
I have set up Tomcat and a THREDDS server (loaded the WAR file) and attempted to serve up some *.nc files via the WMS protocol.
I can request the file but all I seem to get back is a black image.
I had something similar in GeoServer, but there I was able to update the layer styles and set up ranges so that various colours were applied.
I have tried editing 'wmsConfig.xml' and altering options such as 'defaultColorScaleRange', but it doesn't seem to have the desired effect.
I have read the documentation a few times, but I may be missing something. Has anyone overcome this problem? Any help would be great.
Cheers
Update 1
So, as suggested below, using the built-in viewer I can see the image, and this is what I would like to get back when requesting via WMS.
Using the built-in viewer
address : http://10.19.38.63:8080/thredds/godiva2/godiva2.html?server=http://10.19.38.63:8080/thredds/wms/testAll/testData.nc#
Initial request attempt
http://10.19.38.63:8080/thredds/wms/testAll/testData.nc?service=WMS&version=1.3.0&request=GetMap&CRS=EPSG:4326&width=150&height=150&bbox=-10097025.688358642,-12875664.540581377,20037508.342789244,313086.06785608194&LAYERS=precipitation&format=image/png&STYLES=boxfill/red
Which returns just the black square :(
I will carry on and look at the WMS URL used by the OpenLayers example; maybe that's the key... I'll continue to update my question as my journey begins :).
Update 2
Managed to work out that the SRS being passed in was incorrect and needed altering (a corrected request is sketched below).
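For anyone else hitting this: the bbox in my earlier request was in projected metres while the CRS said EPSG:4326, so the two did not match. A consistent version would look roughly like this (bbox now in degrees; note WMS 1.3.0 uses lat,lon axis order for EPSG:4326; the extent values are illustrative):
http://10.19.38.63:8080/thredds/wms/testAll/testData.nc?service=WMS&version=1.3.0&request=GetMap&CRS=EPSG:4326&width=150&height=150&bbox=-90,-180,90,180&LAYERS=precipitation&format=image/png&STYLES=boxfill/red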
Now the next stage is to work out how to request a time series over a batch of netCDF files...
Update 3
Managed to work out a way to automate requesting the WMS services, and with the aid of this great plugin for Leaflet maps I have the desired output.
https://github.com/socib/Leaflet.TimeDimension
Basically I call the WMS endpoint with the GetMap request I require, building up the URL relevant to the file I need to request; the wiring is sketched below.
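A stripped-down version of the map setup (essentially the basic pattern from the plugin's examples; the WMS URL and layer name match my test file above, the zoom/centre values are arbitrary):
// Minimal Leaflet + Leaflet.TimeDimension setup for an animated WMS time series
var map = L.map('map', {
  zoom: 4,
  center: [0, 0],              // arbitrary starting view
  timeDimension: true,         // create a TimeDimension for the map
  timeDimensionControl: true   // add the time slider / player control
});

var wmsLayer = L.tileLayer.wms('http://10.19.38.63:8080/thredds/wms/testAll/testData.nc', {
  layers: 'precipitation',
  format: 'image/png',
  transparent: true
});

// Wrap the plain WMS layer so each animation frame is requested with the right TIME parameter
var timeLayer = L.timeDimension.layer.wms(wmsLayer);
timeLayer.addTo(map);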
The next step for me is looking at styling for the returned raster, which at this point seems to require some Java code modification, but at least my initial problems have gone. Phew!
Update 4
Gone away and tried to rebuild the Java for the ncWMS project, which I found was originally standalone but is now incorporated into THREDDS. Still having no joy with transparency for the raster created from the netCDF.
Looking at the THREDDS code a bit more; I also tried changing palettes, which didn't seem to work. Issue raised:
https://github.com/Unidata/thredds/issues/631
You haven't shared the full URL of your THREDDS request, but 10.19.38.63/thredds/wms/.... is the service URL for the WMS GetCapabilities file, which is an XML file describing the WMS service. That is not the THREDDS URL for viewing the WMS via ncWMS. You need to scroll down the page to the Viewers: section and choose the Godiva2 (browser-based) link.
I'm trying to set up a Facebook share on https://donate.mozilla.org/en-US/thunderbird/share/
The og:url points to just /thunderbird, which is the URL I would want shared. As best I can tell, the og tags are all there.
When I try to update the data on https://developers.facebook.com/tools/debug/og/object/ and fetch new scrape information, I get one of two errors. Initially it takes a long time and then responds with Curl Error : OPERATION_TIMEOUTED Operation timed out after 10000 milliseconds with {some number less than 10000} bytes received, and subsequent fetch attempts respond with Curl Error : PARTIAL_FILE transfer closed with 17071 bytes remaining to read.
We're using AWS CloudFront and Node.js with hapi.js.
It responds with a 206 Partial Content, which should be fine. The og tags are all at the beginning of the file.
I found this: docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/RangeGETs.html
There it says a range request is used to get the file in chunks, not to get just part of the file and give up. So maybe that's causing unexpected behavior. Maybe CloudFront is sending it back in chunks, and Facebook stops listening after the first response? I don't know; I'm just trying to find a theory that fits the facts.
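One way to poke at that theory (a sketch; the Range value is arbitrary and not necessarily what Facebook's scraper sends) is to issue a range request from Node and see whether the page comes back as a 200 or a 206, and how many bytes actually arrive:
// Quick check: how does the share page respond to a Range request?
const https = require('https');

const req = https.get({
  hostname: 'donate.mozilla.org',
  path: '/en-US/thunderbird/share/',
  headers: { Range: 'bytes=0-32767' } // arbitrary range, just to trigger partial-content handling
}, (res) => {
  let received = 0;
  console.log('status:', res.statusCode);                      // 206 = partial, 200 = full body
  console.log('content-range:', res.headers['content-range']);
  res.on('data', (chunk) => { received += chunk.length; });
  res.on('end', () => console.log('bytes received:', received));
});

req.on('error', (err) => console.error('request failed:', err.message));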
We already have a working share for donate.mozilla.org/en-US/share/, but that might be old data from when we were not using hapi.js and were instead using Express, which I don't think supported range requests and would instead return a 200.
I'm mostly a front end dev, so a lot of this is out of my comfort zone but I have already learned a lot :)
Edit: I also want to point out that we use Heroku for hosting, and if I set up a test with just Heroku and without CloudFront (donate.mofostaging.net/en-US/thunderbird/), it fetches the tags successfully. So I suspect it's a bug in how Facebook and hapi.js interact with CloudFront.
I have encountered some strange behaviour with Express routes. I want to enter an ID via an HTML form and fetch the result via AJAX (jQuery) to display the entry. All was working fine until I had to expand the ID from numbers to strings (with slashes).
I edited all the functions and calls. I check the string with a regex and want to handle the request with a modified route (Express), but here comes the problem: I got it working under Windows, but it is failing on Linux. Perhaps the problem is caused by the infrastructure, because the Node.js app sits behind an Apache2 reverse proxy that tunnels the service to the public (with domain & cert).
Whatever the case, perhaps somebody can help me set this thing up and get it running.
app.get(/^\/byId\/(.+)/, getSourceById);
now using req.params[0] in the called function. On the test server (Windows) it is working even with the old route
app.get('/byId/:id', getSourceById);
because the HTML form requests %2F, not /. However, both ways should work to handle the request, but neither is working for me. Did I miss something? A boiled-down version of my setup is below.
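For completeness, the server side reduced to an echo handler (the handler body is trimmed for illustration; on the client I build the URL with encodeURIComponent(id), which is where the %2F comes from):
// Boiled-down Express setup for the regex route above
var express = require('express');
var app = express();

// new route: regex capture, the matched ID ends up in req.params[0]
app.get(/^\/byId\/(.+)/, getSourceById);

function getSourceById(req, res) {
  var id = req.params[0]; // everything captured after /byId/
  res.json({ id: id });   // echo it back for testing
}

app.listen(3000);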
I'm thankful for any help!
Found the answer to my question. It was indeed the reverse proxy that was blocking the request.
Similar problem: http://www.gossamer-threads.com/lists/apache/users/314562
How to solve the behaviour:
http://httpd.apache.org/docs/2.0/mod/core.html#allowencodedslashes
Encoded slashes are forbidden by default because of security issues. If you need them, use the directive carefully.
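For anyone hitting the same thing, the relevant Apache bits look roughly like this (a sketch; the mount path and backend port are placeholders for whatever your vhost uses). NoDecode keeps Apache from decoding the %2F, and nocanon stops mod_proxy from re-encoding the path on its way to Express:
# Allow %2F in request URLs and pass them through undecoded
AllowEncodedSlashes NoDecode

# nocanon prevents mod_proxy from canonicalising the proxied path
ProxyPass /app/ http://localhost:3000/ nocanon
ProxyPassReverse /app/ http://localhost:3000/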
I've recently started using Restangular for making cross domain requests to a RESTful service, and so far everything works great.
But with IE10, when I make a GET request it only hits the server the first time; subsequent calls do not hit the server and return what is probably cached data. I need the data to be refreshed from the server. I tried setting the defaultHttpFields cache to false, but no luck (my configuration is sketched below). Please help!
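For reference, this is roughly how I have it configured (the base URL is a placeholder for the real cross-domain endpoint):
// What I tried: disabling the $http cache via Restangular's defaultHttpFields
angular.module('app', ['restangular'])
  .config(function (RestangularProvider) {
    RestangularProvider.setBaseUrl('https://api.example.com');   // placeholder cross-domain service
    RestangularProvider.setDefaultHttpFields({ cache: false });  // no effect on IE10's behaviour
  });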
Thanks,
Lakshmi
I'm the creator of Restangular.
Could you please post an example? If you didn't set the cache to true in defaultHttpFields, Restangular shouldn't cache this at all.
Have you checked whether the requests are going out in the Network tab of the developer console? Does it work in other browsers? Check RestangularResource in the Restangular library to see if it's actually making the $http call.
Hope it helps!
I just hit this one too. Seems that IE10 is particularly keen on caching results from RESTful calls.
One workaround I used was to just provide some unique value as a parameter to each request and then IE10 seems happy not to cache it. I used the current timestamp in ms since I've seen jQuery use similar workarounds in the past.
var postsApi = Restangular.all("posts");
$scope.allPosts = postsApi.getList({ nocache : new Date().getTime() });
Works for now.
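If you'd rather not add the parameter to every call, another option (just a sketch, untested in this exact setup) is to configure no-cache request headers once via Restangular's setDefaultHeaders, which is the other common way of discouraging IE10 from reusing cached GET responses:
// Alternative: send cache-busting headers on every Restangular request
app.config(function (RestangularProvider) {
  RestangularProvider.setDefaultHeaders({
    'Cache-Control': 'no-cache',
    'Pragma': 'no-cache',
    'If-Modified-Since': '0'
  });
});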