Something strange is happening with my app. I am using Sails.js with the official PostgreSQL driver, and my data gets deleted. I don't have a pattern or a list of specific events that delete the data, but I have the following observations.
A few days back I was writing a function to destroy data. When I executed that function it gave me an error; I fixed the error, ran my web app again, and whoa, the data from one of my tables was all gone.
Yesterday I wrote a function and tried to make the HTTP call to it, but it kept giving me a 500 server error. I started debugging, and after executing my program 3 to 4 times with this error, partial data was deleted from one of my database tables. The error later turned out to be a typo in the URL.
If any of you have experienced what is happening to me, please let me know how to fix it, or at least help me figure out how to reproduce this issue.
EDIT
I activated the logs and waited for it to happen again. It did, and here is the log from Sails.js.
In the logs I saw that it talks about the alter.js sync strategy, but I have selected the safe strategy.
It has happened to me quite a few times: when lifting the app, Sails is in the process of making changes to the DB and it fails, sometimes due to an ORM timeout.
What Sails does when it is lifting and needs to update the data structure is controlled by migrate: 'alter' in config/models.js. It is usually commented out; you get a prompt asking what to do, 1... 2... 3... (writing from the top of my head, I don't remember the actual messages), and a warning about using alter on a production system.
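For reference, here is roughly what that setting looks like (a minimal sketch; the three values are the standard Sails migrate strategies, but the comments are mine):

// config/models.js
module.exports.models = {
  // 'safe'  - never auto-migrate the schema; what you want in production
  // 'alter' - auto-migrate and attempt to keep existing data (dev only)
  // 'drop'  - wipe and rebuild the schema on every lift (dev only)
  migrate: 'safe'
};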
Changing config/orm.js to have this:

// config/orm.js
module.exports.orm = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
And, for reasons I don't know, changing config/pubsub.js:

// config/pubsub.js
module.exports.pubsub = {
  _hookTimeout: 60000 // I used 60 seconds as my new timeout
};
has helped me avoid data loss.
In my onDisable() method in my Main class I have a loop which creates and starts new BukkitRunnables.
I'm getting an error in the console: org.bukkit.plugin.IllegalPluginAccessException: Plugin attempted to register task while disabled. I need to somehow wait in my onDisable() method until all the BukkitRunnables I create in the loop are finished. How can I do that?
My code looks like this:
@Override
public void onDisable() {
    for (Player p : Bukkit.getOnlinePlayers()) {
        new PlayerDataSaverRunnable().runTaskAsynchronously(this);
    }
}
The onDisable method is the very last thing that gets called before your plugin is disabled and the server shuts down. So, as the error message says, you unfortunately can't schedule any new tasks in the onDisable function.
You mentioned in a comment that you were trying to write to a file in the plugins folder, and under normal circumstances you'd want to do that asynchronously. But, because onDisable only ever gets called when the server is shut down (or when the entire server is reloaded with /reload), it's perfectly fine to run code here that blocks the main thread and could potentially take a few seconds to run — in the case of a shutdown, by the time this method gets called, all the players will have already been kicked off the server, and so there's no "lag" to complain about. If your plugin is super advanced and has to save a bunch of stuff, I don't think any server owners would complain even if it took 10 or so seconds to disable.
Of course, you would have to be saving something crazy for it to take a whole 10 seconds to save. More than likely, most files will save in just a few milliseconds.
If you're really dead-set on disabling the plugin as fast as possible, you might consider having some async task that runs every 5 minutes or so and auto-saves the files. Then, in onDisable, you would only save the files that changed since the auto-saver last ran. That's good practice anyway, just in case the server crashes or the power goes out and the onDisable method doesn't get a chance to run. But, then again, I would still recommend saving everything in the onDisable method (that's what I do for all of my plugins as well), even if it takes a few seconds and blocks the main thread, just so you can be 100% sure that everything gets saved correctly.
I am working on a Chromecast custom receiver app, built on top of the sample app provided by Google (sampleplayer.CastPlayer).
The app manages a playlist; I would like the player to move on to the next item in the list after a video fails to play for whatever reason.
I am running into a situation where, after a video fails to load because of a network error, the player becomes unresponsive. In the onError_() handler, my custom code does this:
var queueLoadRequest = ...
var mediaManager = ...
setTimeout(function() { mediaManager.queueLoad(queueLoadRequest); }, 5000);
...the player does receive the LOAD event according to the receiver logs, but nothing happens on the screen: the player's status remains IDLE and mediaManager.getMediaQueue().getItems() remains undefined. I get the same result when trying to use the client controller to load a different video.
I have tried to recover with mediaManager.resetMediaElement() and player.reset() in the onError_ handler, but no luck.
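For context, here is roughly what that recovery attempt looked like (a simplified sketch, not my exact code; the underscore member names follow the sample app's convention, and queueLoadRequest stands in for a request I build elsewhere):

// Inside the sample app's error handler (sketch):
sampleplayer.CastPlayer.prototype.onError_ = function(event) {
  var self = this;
  self.mediaManager_.resetMediaElement(); // try to clear the media state
  self.player_.reset();                   // reset the underlying player
  // ...then attempt to re-load the queue a few seconds later:
  setTimeout(function() {
    self.mediaManager_.queueLoad(queueLoadRequest);
  }, 5000);
};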
For reference, here is a screenshot of the logs (filtered for errors only) leading up to the player becoming unresponsive. Note that I am not interested in fixing the original error; what I need to figure out is how to recover from it:
My custom code is most likely responsible for the issue; however, after spending many hours stripping the custom code down to a bare minimum in an effort to isolate the responsible bit, I have not made any progress. I am not looking for a fix, but rather for some guidance in troubleshooting the root cause: what could possibly cause the player to become unresponsive? Or, alternatively, how can one recover from an unresponsive player?
Mongoose fails to insert when the insertMany command is used to insert documents into the database. I have around 2000 documents which I want to insert, and instead of saving each one of them one by one, I am trying to use the insertMany function to save them.
If no specific index is defined, it takes a huge amount of time to save them to the database; if an index is defined, the connection gets timed out as soon as the insertion operation takes place.
Model.insertMany(documents, function(batchSaveError, savedDocs) {
  if (batchSaveError) {
    callback(batchSaveError);
  } else {
    callback(null);
  }
});
This is the code that I am trying to get working.
The issue turned out to be pretty vague. The connection timeout itself is pretty normal and can happen in any scenario, and whenever the connection times out, Mongoose tries to reconnect all by itself.
What I was missing was that I was not safely capturing the error event on the connection, and that was causing the whole application to crash. Once I added a proper handler for the error event, the connection still timed out a few times, but the insertion went fine and smooth.
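For anyone hitting the same crash, this is roughly what the fix looked like (a minimal sketch; the connection string and the logging are placeholders of mine):

var mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/mydb'); // hypothetical connection string

// Without a listener, an 'error' event emitted by the connection
// (e.g. a timeout during a large insertMany) is thrown and crashes
// the whole process. Handling it keeps the app alive so Mongoose
// can reconnect on its own.
mongoose.connection.on('error', function(err) {
  console.error('Mongoose connection error:', err);
});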
I have made a Node.js application using Sails.js. It works perfectly on my localhost. The problem appears in production, when I try to publish it to the server (Modulus). You can take a look at the error below.
Error: The hook `pubsub` is taking too long to load.
Make sure it is triggering its `initialize()` callback, or else set `sails.config.pubsub._hookTimeout` to a higher value (currently 20000)
    at tooLong [as _onTimeout] (/mnt/data/1/ApiDevConf-master/node_modules/sails/lib/app/private/loadHooks.js:92:21)
    at Timer.listOnTimeout (timers.js:110:15) { [Error: The hook `pubsub` is taking too long to load.
Make sure it is triggering its `initialize()` callback, or else set `sails.config.pubsub._hookTimeout` to a higher value (currently 20000)] code: 'E_HOOK_TIMEOUT' }
I have tried to figure out how to solve the problem but nothing works. I was trying something like this here.
I have also properly set NODE_ENV = production.
Thanks for your time.
It sounds like this could be one of two issues.
1.) You need to set the migrate setting in config/models.js to something besides alter. You should have migrate: 'safe' in production mode, as in the sketch below. This should happen automatically if the NODE_ENV variable is set to production.
The reason it times out is that every time you start the server, Sails will try to migrate your existing data to the current schema. You obviously don't want this in production.
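For example (a minimal sketch; the comment is mine):

// config/models.js
module.exports.models = {
  migrate: 'safe' // never auto-migrate the schema in production
};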
2.) You have a lot of files to load and Modulus is slow to read them from its virtual disk. This is a bigger issue, because it will take a very long time for your server to start each time you need to restart it. You can bump the global timeout limit, and that should give you more time. To do that, add the following to your config/env/production.js file:

module.exports = {
  hookTimeout: 40000 // give every hook up to 40 seconds to load
};
I am using the following transaction:
var transactionScopeOptions = new TransactionOptions()
{
    IsolationLevel = IsolationLevel.ReadUncommitted,
    Timeout = new TimeSpan(0, 10, 0)
};

using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionScopeOptions))
{
    /* update query here with data context execute command */
}
And I keep getting a The operation is not valid for the state of the transaction exception, with an inner exception of Transaction Timeout.
Locally I only get it about 1 in 100,000 times, but on the server it happens every now and then. The application is running on MSMQ and WCF.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
I have tried the following:
<system.transactions>
  <machineSettings maxTimeout="02:00:00" />
</system.transactions>
and setting the dataContext.CommandTimeout to 1 hour or 0 (infinite).
I also changed Connection Timeout=3600 in the app.config.
I have tried almost everything I've read on Google, but still no luck. I hope I can get rid of this problem on the server.
NOTE: the update query usually lasts between 0 and 20 seconds (max), but since it's multithreaded, it causes the error. If I run the queries from the exceptions on their own, they don't get the error anymore, presumably because the transaction doesn't time out.
EDIT:
All my queries use dirty reads (NOLOCK for reads and ROWLOCK for updates).
EDIT:
Component Services > My Computer > Options > Transaction Timeout (seconds): 600
Then I restarted "Distributed Transaction Coordinator" in Services.
Still no luck.
EDIT:
I added a timestamp when entering the transactionScope and another when getting the exception (Transaction TimeOut), and it seems it is not really timeout-related: the exception is thrown less than a minute after entering the transaction scope, even though I specified the transaction to time out after 10 minutes.
This means it throws a Transaction TimeOut exception even when it didn't really time out.
EDIT:
Based on the last error, I tried adding the following, based on this website, to my connection string:
Transaction Binding=Explicit Unbind
Though I am using SQL Server 2012 and the latest Visual Studio framework, so I'm not sure it helps at all.
EDIT:
I also have the following in the app config:
<system.transactions>
  <defaultSettings timeout="2:00:00" />
</system.transactions>
and in the service bindings:
<binding name="FooBinding"
         deadLetterQueue="System"
         maxReceivedMessageSize="524288"
         exactlyOnce="true"
         receiveErrorHandling="Move"
         retryCycleDelay="00:00:30"
         maxRetryCycles="120"
         receiveRetryCount="0"
         sendTimeout="00:01:00">
and
<serviceTimeouts transactionTimeout="00:10:00" />
EDIT:
I tried changing the config to the following, hoping it was a WCF timeout (the default is 1 minute), but I still get those exceptions (timeouts extended to 10 minutes):
<binding name="FooBinding"
         deadLetterQueue="System"
         maxReceivedMessageSize="524288"
         exactlyOnce="true"
         receiveErrorHandling="Move"
         retryCycleDelay="00:00:30"
         maxRetryCycles="120"
         receiveRetryCount="0"
         closeTimeout="00:10:00"
         openTimeout="00:10:00"
         receiveTimeout="00:10:00"
         sendTimeout="00:10:00" />
I asked a related question with some code logic in another link.