I have a few GET endpoints for downloading files from the server. When we simulate the download for smaller files (e.g. 500 MB) it works perfectly, but for files of 1 GB and above, Gatling starts the download and after some time stops with the error j.i.IOException: Premature close. I have also verified that this is not an application issue, as the same downloads work fine in LoadRunner.
Observation: the file download takes more than one minute.
I looked into the Gatling config file and updated the following settings, but none of them helped resolve the issue:
shutdownTimeout, connectTimeout, handshakeTimeout, pooledConnectionIdleTimeout
The timeout you have to increase is the requestTimeout.
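For example, in gatling.conf (the value below is illustrative; 60000 ms is the shipped default):

gatling {
  http {
    # Time in millis after which an in-flight HTTP request is aborted.
    # Raise it comfortably above the longest expected download time.
    requestTimeout = 600000
  }
}

With the default of 60000 ms, any response still streaming after one minute is aborted, which matches your observation that the download takes more than a minute.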
I'm receiving an error when trying to upload a file into a container in Azure using the portal. The UI for uploading files has changed a bit, so I'm inclined to believe the problem may be due to an update, but I wanted to be sure and see if there's anything I can check or do on my end. I tried a couple of different files that I had uploaded successfully before today, and they errored out as well.
Would anyone know why I'd suddenly be receiving this error?
Failed to upload 1 out of 1 blob(s):
blob_test.xml: Failed to fetch
I noticed this also. It seems like the upload button is enabled too soon. If I select a file or files and then press Upload right away, it often fails. If I wait 10-20 seconds after the Upload button is enabled, it seems not to fail.
Waiting even five minutes didn't help me, but refreshing the page did.
A combination of the two worked for me: refresh the page, then wait a little while before clicking Upload.
I've encountered an issue with my .ini config file: it takes a few hours to finish loading on Linux.
The config load is triggered by CDO initialization and is performed once.
There is no such problem on any other platform.
The config contains a few TMap<> containers and is about 5 MB in size.
Removing the data from the config fixes the issue, but if the running application saves it again and LoadConfig() is triggered on the next launch, the application gets stuck loading in the same way.
Changing TMap<> to TArray<> also fixes the issue, but it's not a perfect solution for me.
Loading is stuck in a loop at PropertyMap.cpp:787 (FMapProperty::ImportText_Internal).
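For illustration, the setup looks roughly like this (a simplified sketch, not my actual code; the class and property names are made up):

#pragma once

#include "CoreMinimal.h"
#include "UObject/Object.h"
#include "MyConfigData.generated.h"

// Hypothetical Config-marked UObject: its Config properties are read
// from the .ini when LoadConfig() runs during CDO initialization.
UCLASS(Config=Game)
class UMyConfigData : public UObject
{
    GENERATED_BODY()

public:
    // The map is stored in the .ini as one long text value; on load,
    // FMapProperty::ImportText_Internal parses it entry by entry, which
    // is where the stall shows up.
    UPROPERTY(Config)
    TMap<FName, FString> Entries;

    // Replacing the map with an array of key/value structs avoids the
    // FMapProperty import path and loads quickly, at the cost of map
    // semantics (which is why it's not a perfect solution for me).
    // UPROPERTY(Config)
    // TArray<FMyEntry> EntriesAsArray;
};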
Any help would be great.
I have just installed WordPress in a new Azure Web Application, and everything is running fine at the moment.
However, I am getting notifications that I am exceeding the file system storage quota (1 GB).
I went to check the value in the Azure Portal, and the strange thing is that the reported storage usage varies between 12% and 100% every single minute! The measurement must be off somehow, since I am not changing anything on my website, nor do I have huge files there.
12% must be the realistic value for WordPress. I have tried restarting the website and checking the log files to see if there is anything wrong, but did not find anything.
How can I fix that?
Igor.
You might want to try deactivating your most recently installed plugins one by one to see whether this corrects the issue. There could be a bug in one of them, such as some type of loop that generates a large log file.
By the end of the same day the problem had disappeared. Everything seems stable now, but if it happened once, it might happen again. I will keep you posted if I experience this problem again.
Thank you.
I'm trying to follow the contributing instructions for Video.js, and when I attempt to build the project, grunt hangs on the minify step. Here's the output:
Running "minify" task
Running "minify:source" (minify) task
Verifying property minify.source exists in config...OK
Files: build/files/combined.video.js, build/compiler/goog.base.js, src/js/exports.js -> build/files/minified.video.js
Writing build/files/minified.video.js...OK
I get no error or message; it just hangs there for minutes. A file called minified.video.js is created in build/files, but it's empty. Any ideas on where I'm going wrong?
Update: I've tried it on three computers now with the exact same results. Two of the computers are Windows 7 x64 and one is Windows 8 x64. My gut feeling is that Closure is choking on the really long command line that the build file sends to it (cmd.exe caps a single command line at 8191 characters, so truncation is plausible), but I'm having a hard time debugging that.
Update 2: I modified the Gruntfile.js to make it output the full command it uses to call Closure, and then I tried running that directly. I got the following result: 0 error(s), 576 warning(s), 82.5% typed. I would think that many warnings point to an issue, but I don't know what it could be. The full output from Closure can be found here: http://pastebin.com/GZuFxiqh.
Update 3: I set up a virtual machine running Ubuntu, and the build worked flawlessly. So this is somehow Windows-related, but I can't see how.
Well, I never figured out why the build was failing for me. However, as of Video.js version 4.4.2, I am able to build the project without problems. Maybe it was a configuration problem on my end, or maybe some bugs got fixed on their end, but the issue is now resolved.
Does anyone have problems getting files from StarTeam using CruiseControl to set up an automatic build job?
The script seems to run fine but fails after some time with the error message:
Error occurred:
Unable to read data from the network: the connection to the StarTeam server has been lost.
I am not sure whether the problem is the way our StarTeam server has been set up: we have 4 licenses shared across the team, and the server automatically logs people out if it detects inactivity for a period of time.
I've got StarTeam working with CruiseControl.NET, so I can tell you that it works, but it's a pain, especially because StarTeam won't remove deleted files from working directories on its own.
I've seen that error once before, but if I remember right, it had to do with an IPSec configuration problem and failed immediately, not after a delay. Is the amount of time it takes to fail shorter than, longer than, or the same as the inactivity timeout? Are any, all, or none of your source files making it into the working directory before it fails?