WMS layers rendering slowly in OpenLayers 3 on Azure?

I have GeoServer 2.11.2, PostgreSQL 9.5, OpenLayers 3, and Tomcat 8 all installed on an Ubuntu 16.04 Azure cloud virtual machine. I have also enabled GeoWebCache, but WMS layer rendering is still slow (15 to 16 seconds). Is there any way to improve the speed of the web tool beyond what it is now? Thanks.

Broadly, it sounds like something is misconfigured. There are some excellent resources in the GeoServer docs (http://docs.geoserver.org/stable/en/user/production/) about running in production. From GeoSolutions, there are training materials (http://geoserver.geo-solutions.it/edu/en/enterprise/index.html) and talks (https://www.slideshare.net/geosolutions/geoserver-in-production-we-do-it-here-is-how-foss4g-2016) which cover common techniques for data prep, JVM options, and other considerations that may help.
As a particular call-out, I'd strongly suggest Marlin (https://github.com/bourgesl/marlin-renderer/wiki/How-to-use). Its use in GeoServer can help immensely with concurrent rendering (http://www.geo-solutions.it/blog/developerss-corner-achieving-extreme-geoserver-scalability-with-the-new-marlin-vector-rasterizer/).
It may be worth making sure that PostGIS is installed and that your data has a spatial index. Tuning PostGIS is a separate topic.
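For reference, adding a spatial index is a one-liner in SQL. Here is a minimal sketch using psycopg2, assuming a hypothetical table roads with a geometry column geom; adjust the names and connection details to your own database:

```python
# Minimal sketch: add a GiST spatial index to a PostGIS table (table "roads"
# and column "geom" are placeholder names) and refresh the planner statistics.
import psycopg2

conn = psycopg2.connect(dbname="gis", user="postgres", password="secret", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("CREATE INDEX IF NOT EXISTS roads_geom_idx ON roads USING GIST (geom);")
    cur.execute("ANALYZE roads;")
conn.close()
```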
Once the data is prepped and indexed and Marlin is up and running, it may be worth seeding the GWC cache. With that, your application would just be serving pre-baked tiles for coarse zoom levels and that should be snappier.
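Seeding can be done from the GeoWebCache web UI or scripted against its REST API. The sketch below is an illustration only: the layer name myworkspace:roads, the zoom range, and the admin credentials are all placeholders you would replace with your own.

```python
# Hedged sketch: kick off a GWC seed job over GeoServer's embedded GWC REST API.
import requests

GEOSERVER = "http://localhost:8080/geoserver"
LAYER = "myworkspace:roads"  # hypothetical layer name

seed_request = f"""
<seedRequest>
  <name>{LAYER}</name>
  <srs><number>900913</number></srs>
  <zoomStart>0</zoomStart>
  <zoomStop>10</zoomStop>
  <format>image/png</format>
  <type>seed</type>
  <threadCount>2</threadCount>
</seedRequest>
"""

resp = requests.post(
    f"{GEOSERVER}/gwc/rest/seed/{LAYER}.xml",
    data=seed_request,
    headers={"Content-Type": "text/xml"},
    auth=("admin", "geoserver"),  # placeholder credentials
)
print(resp.status_code)  # 200 means the seed job was accepted
```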

Looks like you have a lot of layers turned on in your map. Just zooming in once triggered a total of 700 individual tile requests, most of them to your GeoServer. I don't think your main problem is your GeoServer (although tweaking it using the other answer's suggestions is always a good idea); I think your main problem is simply throughput.
Most browsers have a limit (when using HTTP/1.1) on how many simultaneous requests can be sent to the same domain; once you hit that limit, all further requests are queued until the previous ones are done. I think that's your problem: your server is dealing with the requests as quickly as it can, but there are so many that it simply cannot serve them at the speed you are expecting.
I would strongly recommend you look at reducing the number of layers you have loaded by default, or implementing some kind of zoom restriction so that certain layers turn off at different zoom levels. You could even think about combining a number of the layers into one and using GeoServer's CQL filtering to change what is displayed.
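To illustrate the CQL idea: a single combined layer can be requested with the CQL_FILTER vendor parameter so that only certain features are drawn. The layer name and attribute below are hypothetical.

```python
# Sketch of a WMS GetMap request using GeoServer's CQL_FILTER vendor parameter.
import requests

params = {
    "service": "WMS",
    "version": "1.1.0",
    "request": "GetMap",
    "layers": "myworkspace:roads",        # hypothetical combined layer
    "styles": "",
    "bbox": "-180,-90,180,90",
    "width": 768,
    "height": 384,
    "srs": "EPSG:4326",
    "format": "image/png",
    "CQL_FILTER": "road_class IN ('motorway','primary')",  # hypothetical attribute
}

resp = requests.get("http://localhost:8080/geoserver/wms", params=params)
with open("map.png", "wb") as f:
    f.write(resp.content)
```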

Related

Is SRM in Google Optimize (Bayesian model) a thing?

Checking for Sample Ratio Mismatch (SRM) is good for data quality, but in Google Optimize I can't influence the sample split or do anything about it.
My problem is that out of 15 A/B tests, only 2 experiments showed no SRM.
(I used this tool: https://www.lukasvermeer.nl/srm/microsite/)
On the other hand, the Bayesian model supposedly deals with things like unequal sample sizes, so I shouldn't need to worry, but opinions on this topic differ.
Is SRM really a problem in Google Optimize or can I ignore it?
SRM affects Bayesian experiments just as much as it affects Frequentist ones. SRM happens when you expect a certain traffic split but end up with a different one. Google Optimize is a black box, so it's impossible to tell whether the uneven sample sizes you are experiencing are intentional or not.
Lots of things can cause an SRM. For example, if your variation's JavaScript code has a bug in some browsers, those users may not be tracked properly. Another common cause is a variation that increases page load time: more people abandon the page and you see a smaller sample size than expected.
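If you want to check for SRM yourself rather than rely on a tool, it boils down to a chi-square goodness-of-fit test on the observed counts versus the split you configured. A minimal sketch (the counts are made up):

```python
# Chi-square SRM check: compare observed arm sizes with the configured 50/50 split.
from scipy.stats import chisquare

observed = [4920, 5227]                  # users who actually entered each arm
total = sum(observed)
expected = [total * 0.5, total * 0.5]    # what a 50/50 split should have produced

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.01:
    print(f"Possible SRM (p = {p_value:.4f}) - investigate before trusting results")
else:
    print(f"No evidence of SRM (p = {p_value:.4f})")
```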
That lack of statistical rigor and transparency is one of the reasons I built Growth Book, which is an open source A/B testing platform with a Bayesian stats engine and automatic SRM checks for every experiment.

The best way to load an OpenStreetMap .osm file in a Docker container

My intentions:
I intend to:
implement vehicles as containers
simulate/move these containers along the roads in the .osm map
My viewpoint about the problem:
I have loaded the XML-based .osm file and processed it in Python using xml.dom, but I am not satisfied with the loading performance, because later on I will have to add/create more vehicle containers to be simulated on the same roads.
Suggestions needed:
This is my first time solving a problem related to maps. I need suggestions on how to proceed with this set of requirements, keeping performance/efficiency in mind. Suggestions in terms of implementation will be much appreciated. Thanks in advance!
Simulating lots of vehicles by running lots of Docker containers in parallel might work, I suppose. Maybe you're initialising the same image with different start locations etc. passed in as ENV vars? As a practical way of doing agent simulations this sounds a bit over-engineered to me, but as an interesting Docker experiment it might make sense.
You'll probably also need a central service for holding and sharing state (the positions of the other vehicles) and serving it back to the multiple agents.
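As a purely illustrative sketch of that central state service, here is a tiny Flask app that vehicle containers could POST their positions to and GET everyone else's from. Flask is an arbitrary choice on my part; a Redis instance or any small HTTP service would do the same job.

```python
# Hypothetical shared-state service for the vehicle agents.
from flask import Flask, jsonify, request

app = Flask(__name__)
positions = {}  # vehicle_id -> {"lon": ..., "lat": ...}

@app.route("/positions", methods=["GET"])
def all_positions():
    # Agents poll this to learn where the other vehicles are.
    return jsonify(positions)

@app.route("/positions/<vehicle_id>", methods=["POST"])
def update_position(vehicle_id):
    # Each agent reports its own position after every simulation step.
    positions[vehicle_id] = request.get_json()  # e.g. {"lon": 13.4, "lat": 52.5}
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```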
The challenge of loading an .osm file into some sort of database or internal map representation doesn't seem like the hardest part, and because it can be done once at initialisation, I imagine it's not the most performance-critical part of this.
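For example, if you stay in Python, the osmnx package (an assumption on my part, not something you mentioned) can load an OSM XML extract straight into a routable networkx graph at startup; the file name below is a placeholder:

```python
# Sketch: parse an .osm extract into an in-memory road graph once at startup.
import osmnx as ox
import networkx as nx

# bidirectional=True treats every road as two-way, which keeps this sketch simple
# but ignores one-way restrictions.
graph = ox.graph_from_xml("map.osm", bidirectional=True, simplify=True)

# Example query: shortest path (by edge length) between two node IDs in the graph.
origin, destination = list(graph.nodes)[0], list(graph.nodes)[-1]
route = nx.shortest_path(graph, origin, destination, weight="length")
print(f"{len(graph.nodes)} nodes, route with {len(route)} hops")
```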
I'm thinking you'll probably want to do "routing" through the road network (taking one-way streets etc. into account?), giving your agents a purposeful path to follow to a destination. This will get more complicated if you want to model interactions with other agents, e.g. getting stuck in traffic because other agents are going the same way, or even decisions to re-route because of traffic, so you may want quite a flexible routing system, perhaps self-coded.
But there are lots of open source routing systems that work with OSM data, at least to draw inspiration from. See this list: https://wiki.openstreetmap.org/wiki/Routing#Developers
Popular choices like OSRM are designed to scale up to country-sized or even global OpenStreetMap data, which I imagine is overkill for you (you're probably looking at simulating within a city road network?). Even so, it's probably easy enough to get working in a Docker container.
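For a sense of what using OSRM would look like, here is a sketch that queries the public demo server's HTTP route API; you would point it at your own osrm/osrm-backend container instead, and the coordinates are arbitrary:

```python
# Sketch: request a driving route between two lon/lat pairs from an OSRM server.
import requests

start = (13.388860, 52.517037)   # lon, lat (illustrative)
end = (13.397634, 52.529407)

url = (
    "https://router.project-osrm.org/route/v1/driving/"
    f"{start[0]},{start[1]};{end[0]},{end[1]}"
)
resp = requests.get(url, params={"overview": "full", "geometries": "geojson"})
route = resp.json()["routes"][0]

print(f"distance: {route['distance']:.0f} m, duration: {route['duration']:.0f} s")
coords = route["geometry"]["coordinates"]   # list of [lon, lat] points along the route
```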
Or you might find something lightweight like the code of the JOSM routing plugin easier to embed in your Docker image and customise (although I see that uses a library called "JGraphT").
Then, working from a calculated route, you can compute interpolated steps along that path, which will allow your simulated agents to take a step on each iteration (simulated movement).
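A minimal sketch of that interpolation step using shapely; the coordinates are placeholders, and for real lon/lat data you would project to a metric CRS first so the distances are in metres:

```python
# Step an agent along a route by interpolating evenly spaced points on the path.
from shapely.geometry import LineString

route = LineString([(0.0, 0.0), (0.0, 50.0), (30.0, 50.0)])  # hypothetical path

step = 10.0  # distance moved per simulation tick, in the route's units
positions = [
    route.interpolate(d)
    for d in (i * step for i in range(int(route.length // step) + 1))
]

for tick, p in enumerate(positions):
    print(f"tick {tick}: agent at ({p.x:.1f}, {p.y:.1f})")
```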

Image size optimisation in Linux

There is a Yahoo "Smush.it" service that allows you to optimise image size.
I am looking for an alternative way to reach the same goal using a Linux application.
A lot of images need to be processed, and uploading them manually one by one does not seem like a good idea.
How can this be done in Linux?
For JPEGs I use JPEGmini; from what I have tested, it gives the best results, keeping the same visible image quality while greatly reducing file size. They have a server version for Linux, which is not cheap and which I have never used.
There's also Mozilla's mozjpeg, which you can use directly from the terminal, but it also reduces image quality.
In some tests I did, mozjpeg gave slightly smaller files than JPEGmini's, but with lower image quality.
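For batch use, something along these lines would run mozjpeg's cjpeg over a whole folder from Python. It assumes the mozjpeg build of cjpeg is on your PATH (the mozjpeg version, unlike the stock libjpeg tool, can also read JPEG input), and the quality setting is an arbitrary choice:

```python
# Sketch: batch-compress a folder of JPEGs with mozjpeg's cjpeg.
import subprocess
from pathlib import Path

src = Path("images")
dst = Path("optimised")
dst.mkdir(exist_ok=True)

for jpg in src.glob("*.jpg"):
    out = dst / jpg.name
    subprocess.run(
        ["cjpeg", "-quality", "80", "-outfile", str(out), str(jpg)],
        check=True,
    )
    print(f"{jpg.name}: {jpg.stat().st_size} -> {out.stat().st_size} bytes")
```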
If you need to reduce PNGs, you could try Trimage or some of the alternatives listed at the same link.
Smush.it's FAQ lists all the tools they are using in their service.

How large is the average delay from key-presses?

I am currently helping someone with a reaction time experiment in which reaction times on the keyboard are measured. It might be important to know how much error could be introduced by the delay between the key-press and its processing in the software.
Here are some factors I have already found out using Google:
The USB bus is polled at 125 Hz at minimum and 1000 Hz at maximum (depending on settings, see this link), so polling alone can add up to 8 ms (1/125 s) of delay.
There might be additional keyboard buffers in Windows that delay key-presses further, but I do not know the logic behind them.
Unfortunately it is not possible to control the low-level logic of the experiment. The experiment is written in E-Prime, a software package that is often used for this kind of experiment. However, the company that offers E-Prime also sells additional hardware that they advertise for precise reaction timing, so they seem to be aware of this effect (but do not say how large it is).
Unfortunately it is necessary to use a standard keyboard, so I need to find ways to reduce the latency.
Any latency from key presses can be attributed to the debounce routine (I usually use 30 ms to be safe) and not to the processing algorithms themselves (unless you are only evaluating the first press).
If you are running an experiment where millisecond timing is important, you may want to use http://www.blackboxtoolkit.com/ to find sources of error.
Your needs also depend on the nature of your study. I've run RT experiments in E-Prime with a keyboard. Since any error should be consistent on average across participants, for some designs it is not a big problem. If you need to sync the data with something else (like eye tracking or EEG), or want to draw conclusions about RT where the specific magnitude is important, then E-Prime's serial response box (or another brand, though I have had compatibility issues in the past between other brands of box and E-Prime) is a must.

DAM Solutions for full-length movies

Are there any DAM (Digital Asset Management) solutions on the market that can handle storage and classification of full-length movies (10 GB and greater)?
I have heard of solutions from EMC Documentum, Artesia, Oracle UCM and the like, but I am not sure whether they handle file sizes this large. Are there any open-source systems?
I am going to go out on a limb and say 'no'. There may be some custom ones around, but I have not seen anything that could handle videos of that size.
Personally, I have implemented the image portion of Oracle's DAM (which was still owned by Stellent at the time). I remember the tech being optimized for short streaming videos.
In Australia, the ABC recently launched a streaming service:
http://www.abc.net.au/iview/
This, like other similar services I have seen, seems to be limited to episodes or shows in half-hour or one-hour blocks.
Actually, 10 GB seems like a crazy size to enter into a CM system, as CM implies the files will be shared/available remotely.
Are the videos uncompressed?
Do you want to stream/provide them across the network?
I would be interested to know some more details on the system you are after.
