Currently the timestamps in my access logs look like this:
2018-03-20T04:11:42-07:00
I'm a bit confused by this notation. What is the T part about? And how can I include milliseconds in the time part?
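For context, that format is ISO 8601: the T is simply the separator between the date part and the time part, and -07:00 is the UTC offset. A quick Node sketch showing a timestamp that includes milliseconds:

```javascript
// Date.prototype.toISOString always emits milliseconds, in UTC:
const now = new Date();
console.log(now.toISOString()); // e.g. 2018-03-20T11:11:42.123Z

// The T separates the date from the time (ISO 8601);
// the trailing Z means UTC, i.e. an offset of +00:00.
```

Whether your access log can emit milliseconds depends on the server's log format configuration, not on the notation itself.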
I have a Node.js-based application running on Google App Engine. It accesses the database using the node-postgres module. I have noticed the following:
The first request that I make from my machine (using Postman) takes longer (around 800 ms to 1.5 seconds). However, the subsequent requests that I make take much less time (around 200 ms to 350 ms).
I am unable to pinpoint the exact reason for this happening. It could be due to the following reasons:
A new connection is initiated the first time I make a request to the server.
There is some issue with the database fetching using node-postgres (but since the problem occurs only on the first request, this seems unlikely).
I am worried about this issue because the logs show that almost 20% of my requests take around 2 seconds. When I viewed the logs for some of the slow requests, they appeared to be instantiating a new process, which led to the longer wait time.
What can I do to investigate further and resolve this issue?
Your first request takes more time than the others because App Engine standard has a startup time for each new instance. This time is short, but it is there, and on top of it you need to add the time to set up the connection to the database. This is why the first request has a longer response time.
To better understand App Engine startup time, you can read the Best practices for App Engine startup time doc (a little old, but still very clear). To profile your App Engine application, you can read this public Medium blog post.
After this, you can set up a Stackdriver dashboard to determine whether your 20% of slow requests are due to new App Engine instances starting up.
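One way to keep the database setup cost out of the request path is to create the connection pool once at module scope and reuse it across requests, so only the first request on a fresh instance pays for it. A minimal sketch of the pattern, using a stand-in connect() in place of a real node-postgres Pool (names hypothetical):

```javascript
// In a real app this would be `const pool = new pg.Pool({...})`
// created once at module load, not per request.
let pool = null;
let setupCount = 0;

function connect() {
  // Stand-in for the expensive setup (instance start + DB connection)
  // that only the first request on a cold instance pays for.
  setupCount += 1;
  return { query: (sql) => `result of ${sql}` };
}

function handleRequest(sql) {
  // Lazily initialize the pool on the first request, then reuse it.
  if (pool === null) {
    pool = connect();
  }
  return pool.query(sql);
}

// First request pays the setup cost; later requests reuse the pool.
handleRequest('SELECT 1');
handleRequest('SELECT 2');
console.log(setupCount); // 1
```

With a real pg.Pool the idea is the same: the pool object lives for the lifetime of the instance, so warm requests skip connection setup entirely.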
I have a script that uses Node.js and Puppeteer. The script runs wonderfully on my Windows 10 machine for as long as I don't close it from the command line.
On my VPS it works for exactly 30 minutes. I tried a few times and it is always exactly 30 minutes: the Node.js process keeps running, but no data is received after 30 minutes. I'm scraping a web socket, just for the info.
I have tried every arg on launch, but nothing keeps the connection alive.
Have you tried resetting the websocket connection yourself to bypass the issue? Not sure the application here, but a simple "disconnect - reconnect" every 29 minutes (or every minute for that matter) might just do the trick?
Finally I found a solution :)
I guess the site checks your activity, and if you are inactive for 30 minutes it closes any open connection. With Puppeteer you can simulate mouse movement, and that is the solution: I put the movement in an interval and everything is fine now. If someone has this issue, just use this method and all will be good.
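A sketch of that keep-alive pattern, assuming page is an already-launched Puppeteer page (the interval length and coordinates here are arbitrary):

```javascript
// Pick a pseudo-random point inside a bounding box so the simulated
// activity does not always hit the same pixel.
function jitteredPosition(maxX, maxY) {
  return {
    x: Math.floor(Math.random() * maxX),
    y: Math.floor(Math.random() * maxY),
  };
}

// Periodically simulate activity so the site does not treat the
// session as idle. moveMouse is whatever performs the movement;
// with Puppeteer it would be: (x, y) => page.mouse.move(x, y)
function startKeepAlive(moveMouse, intervalMs) {
  return setInterval(() => {
    const { x, y } = jitteredPosition(800, 600);
    moveMouse(x, y);
  }, intervalMs);
}

// Example wiring with Puppeteer:
//   const timer = startKeepAlive((x, y) => page.mouse.move(x, y), 60 * 1000);
//   ...later: clearInterval(timer);
```

Anything well under the site's 30-minute idle cutoff (e.g. once a minute) should be enough.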
Hitting the database again and again at short intervals is a big mess: if there are 100k users logged in, the database will get 1 million requests every 10 seconds, which I can't afford. I have researched this issue a lot and need a proper solution.
(Working in NODEJS & PostgreSQL)
Postgres 9.4+ provides logical decoding, which gives access to row-level changes. You can listen to the write-ahead log (WAL) of Postgres and have your application receive data as a push from the database.
You may have to build a middleware layer that does this for you. I found a good write-up that talks about utilizing logical decoding with Apache Kafka streams:
https://www.confluent.io/blog/bottled-water-real-time-integration-of-postgresql-and-kafka/
I developed an application backed by Node.js/MongoDB, with a frontend in AngularJS 1.6.5. The application provides real-time analytics, and the data grows in size every single minute.
As the database size increases, queries take much longer to execute and sometimes even fail with a 404 error (Nginx server). But the strange thing is that whenever I run the same query in the mongo shell on the server with the .explain('executionStats') method, I get an immediate response with an execution time of 0 milliseconds.
So, as per the screenshots above, with a limit of 10 documents the mongo shell executes the query in no time, but in the browser, when I hit the database through Node and Mongoose, it took 1 minute and still couldn't return the result (maybe Nginx returns 404 after a specific time).
But when I then tried in the mongo shell without setting any limit, it returned 6,331,522 records in less than 4 seconds.
I have no clue what the issue is exactly. Any help would be appreciated.
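When the shell is fast but the app is slow, it helps to measure where the time goes on the Node side before blaming the database. A minimal timing wrapper (names hypothetical; the model name and query are assumptions for illustration):

```javascript
// Times an async step so you can separate query time from
// serialization and proxy (Nginx) time in your logs.
async function timed(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms`);
  return result;
}

// Usage with Mongoose (hypothetical model name):
//   const docs = await timed('find users', () =>
//     User.find().limit(10).lean().exec());
//
// .lean() returns plain objects instead of hydrating full Mongoose
// documents, which is often the difference between shell speed and
// app speed on large result sets.
```

If the timed query is fast but the HTTP response is still slow, the time is being spent after the driver returns (document hydration, JSON serialization, or the proxy), not in MongoDB itself.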
I'm looking to create a custom module which hooks into the Drupal timeout procedure. It needs to fire a quick ping to another server when a user times out - so that they are logged out of the systems on the second server too.
Thing is... I can't find any documentation about how Drupal manages its timeout. What I have been able to find all relates to php.ini.
This leads me to wonder if it's possible to fire an event on timeout at all. Does anyone have experience with this?
Thanks,
Hugh
In /sites/default/settings.php, change:
ini_set('session.cookie_lifetime', 2000000);
2000000 seconds is about 23 days, which seems a really excessive default timeout, so you can change it to something like:
3600 (1 hour)
10800 (3 hours)
86400 (1 day)