500 Internal Server Error when performing large MySQL insert with PHP / IIS 7.5

System Spec:
VPS running Windows Server 2008 R2 SP1
64-bit dual core 2.39GHz VCPU
2GB RAM
Parallels Plesk for Windows 10.4.4
IIS 7.5
PHP 5.2.17
MySQL 5.1.56
I have a PHP script that loops through a static file and imports each line as a row in MySQL. This works fine if the file is split into chunks of a few thousand lines at a time, but splitting it creates a lot of manual effort.
The whole file contains around 160,000 lines to be imported. The script currently connects to the database via mysql_connect / mysql_select_db, runs the loop with mysql_query, and disconnects at the end of the loop. However, at some point between roughly 55 seconds and 1 minute 35 seconds into the run, the client browser returns a 500 Internal Server Error page, which contains no useful diagnostic information.
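For reference, the loop is essentially the following (simplified; the file name, delimiter, table and column names are placeholders for the real ones):

<?php
// Simplified sketch of the import loop described above (mysql_* as currently used;
// the mysqli variant I tried is structured the same way).
$link = mysql_connect('localhost', 'user', 'password') or die(mysql_error());
mysql_select_db('mydb', $link) or die(mysql_error());

$handle = fopen('import.txt', 'r');
while (($line = fgets($handle)) !== false) {
    list($col1, $col2) = explode("\t", rtrim($line, "\r\n"));
    $sql = sprintf(
        "INSERT INTO my_table (col1, col2) VALUES ('%s', '%s')",
        mysql_real_escape_string($col1, $link),
        mysql_real_escape_string($col2, $link)
    );
    mysql_query($sql, $link) or die(mysql_error());
}
fclose($handle);

mysql_close($link);
?>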
I have tried increasing the max connection times of MySQL, PHP, IIS and even the max user sockets for winsock, to no avail.
I tried performing a connect / disconnect to MySQL for each insert query, but this caused thousands of connections to the server which were then stuck in a "TIME_WAIT" state, and returned a "could not connect to server" error, presumably due to insufficient sockets remaining. I have also tried both the mysql and mysqli extensions.
I have looked through all the logs I can find for IIS and MySQL, but cannot see anything that would help with finding the cause.
The last two attempts inserted 33,979 and 78,173 rows respectively.
Can anyone offer any assistance?
Thanks.
** UPDATE **
This must be an IIS issue. I have converted the script to run via command-line PHP and it processes the whole file with no issues.
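For anyone trying the same workaround, invoking the script from the command line (or a scheduled task) is just something like the following; the paths are examples only:

C:\PHP\php.exe -f C:\scripts\import.php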

Sounds like an IIS issue. Most 500 errors I have come across were caused by problems in the Web.config file. I would take a look at that and make sure the settings and the syntax are correct. Many a time I have forgotten to close my tags and received a 500 error.
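If it helps with diagnosis, IIS 7.x can also be told to return detailed error pages instead of the generic 500; a minimal web.config fragment for that (assuming you can edit the site's web.config) would be:

<configuration>
  <system.webServer>
    <!-- Show detailed error pages so the real cause of the 500 is visible -->
    <httpErrors errorMode="Detailed" />
  </system.webServer>
</configuration>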

Use LOAD DATA INFILE instead of trying to do the INSERTs via PHP. It will run a lot faster, thereby avoiding the 500 error.
Do not even consider using the mysql_* interface; it is deprecated and has been removed from recent PHP releases. Switch to mysqli or PDO.
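Putting both suggestions together, a rough sketch with mysqli and LOAD DATA LOCAL INFILE could look like the following (the file path, table and column names are placeholders, and local_infile has to be enabled on both the server and the client):

<?php
// Sketch only: bulk-load the whole file in one statement instead of per-row INSERTs.
$db = new mysqli('localhost', 'user', 'password', 'mydb');
if ($db->connect_error) {
    die('Connect failed: ' . $db->connect_error);
}

$sql = "LOAD DATA LOCAL INFILE 'C:/data/import.txt'
        INTO TABLE my_table
        FIELDS TERMINATED BY '\\t'
        LINES TERMINATED BY '\\r\\n'
        (col1, col2)";

if (!$db->query($sql)) {
    die('Import failed: ' . $db->error);
}
echo $db->affected_rows . ' rows loaded';
$db->close();
?>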

Related

Inconsistency Errors in kombu using celery and redis with the key '_kombu.binding.reply.celery.pidbox'

I have two Django sites (archive and test-archive) on one machine. Each has its own virtual environment and different Celery queues and daemons, using Python 3.6.9 on Ubuntu 18.04, Django 3.0.2, Redis 4.0.9, Celery 4.3, and Kombu 4.6.3. This server has 16 GB of RAM; under load there is at least 10 GB free, and swap usage is minimal.
I keep getting this error in my logs:
kombu.exceptions.InconsistencyError:
Cannot route message for exchange 'reply.celery.pidbox': Table empty or key no longer exists.
Probably the key ('_kombu.binding.reply.celery.pidbox') has been removed from the Redis database.
I tried:
downgrading Kombu to 4.5 for both sites, per some Stack Overflow posts
setting maxmemory=2GB and maxmemory-policy=allkeys-lru in redis.conf, per the Celery docs (https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/redis.html#broker-redis); see the snippet below. Originally the settings were the defaults (unlimited memory and noeviction), and the errors were present with both versions of Kombu.
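For reference, these are the redis.conf lines as tried (the 2gb value is just what fit this host):

# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru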
I still get those errors when one site is under load (i.e. doing something like uploading a set of images and processing them) and the other site is idle.
What is a little strange is that on some test runs using test-archive, test-archive will not have any errors, while archive will show those errors, even though the archive site is not doing anything. On other identical test runs using test-archive, test-archive will generate the errors and archive will not.
I know this is a reported bug in kombu/celery, so I am wondering if anyone has a workaround that works more often than not for this configuration. Which versions of celery, kombu, redis, etc. seem to be the most reliable? I am happy to share my config files or log files, but there are so many that I thought it would be best to start this discussion with the problem statement and my setup, and see what else is needed.
Thanks!

Idle Oracle connections give error 'ORA-03114: not connected to ORACLE'

We have a Node.js web application that connects to an Oracle DB instance. The problem is that after some inactivity, the database connections appear to become read-only: SELECT operations work, but INSERT and UPDATE transactions encounter this error:
"Error: ORA-03114: not connected to ORACLE"
The problem goes away after restarting the application. We use the latest versions of knex (0.20.1) and node-oracledb (4.1.0) to connect to the database.
The error means that something (probably a firewall) has expired the connection. You should track down the cause and eliminate it. There may be workarounds, such as configuring the Oracle Net layer to send occasional pings across the network to stop idle connections from being terminated; see https://oracle.github.io/node-oracledb/doc/api.html#connectionha
Both queries and DMLs will be equally affected on the connection that gives the error - all will fail. I suspect you are using a different (new) connection for the query.
If you are using 19c client libraries (which, by the way, connect to Oracle DB 11.2 or later), then your connection string could use Easy Connect syntax like:
"mydbmachine.example.com/orclpdb1?expire_time=2"
This performs a keep-alive operation on idle connections, sending probes every two minutes. The general recommendation is to set the period to just under half the time after which connections are terminated (e.g. by a firewall). See the tech article Oracle Database 19c Easy Connect Plus Configurable Database Connection Syntax.
Other syntaxes can be used in older versions, or in tnsnames.ora files; check the doc.
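As a rough illustration only (the host and service name are taken from the example above, and the placement of EXPIRE_TIME should be checked against the documentation for your client version), a tnsnames.ora entry along the same lines might look like:

MYDB =
  (DESCRIPTION =
    (EXPIRE_TIME = 2)
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydbmachine.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orclpdb1))
  )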
I faced the same problem and found a solution: add the environment variables (system variables) ORACLE_HOME (the installation path; in my case F:\app\krushna\product\11.2.0\dbhome_1) and ORACLE_SID (orcl or xe, whichever you have). It worked for me.

Oracle temporary ORA-12505 error after Linux startup

I am experiencing very strange behavior with Oracle; maybe somebody can help me. Let me summarize it quickly:
My OS of choice is Debian Linux, and I am using Oracle XE 11.0.2.0. On Linux startup I run a script located under /etc/init.d/, and I added the following line to make Oracle start at system startup:
/etc/init.d/oracle-xe start
Right after this line I run my application from the same script. My application relies heavily on the Oracle DB, so once Oracle has started I assumed the application would run fine. Unfortunately that assumption seems wrong. Here's why: I set up a similar configuration on 3 machines, and on 2 of them I see weird behavior: after system start the Oracle DB does not respond to connection requests, even though the oracle-xe start command has finished executing.
My observation is the following: if I run my application right after oracle-xe start has executed, I receive ORA-12505 errors ("TNS listener does not currently know of SID") for at least a minute. After that, everything stabilizes and my application starts working fine. A minute without a database at system startup is not acceptable for me performance-wise, so I am trying to solve this problem.
Surprisingly, it does not happen on one of the other Linux boxes I have here, and I am not quite sure what is different about that box. I compared the .ora files but couldn't find any difference; it feels like a wild goose chase...
I would be grateful if anybody who has experienced and solved this problem before could share that valuable solution with me.
I think I found the problem: it looks like I am starting the oracle-xe instance before the network interfaces are assigned an IP address, and in that case it takes some time for Oracle to accept connections. Avoiding it that way would require setting static IPs on the Linux boxes, which is something I don't want. Is there a solution so that I can still assign IP addresses later on?
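One workaround I am considering (a sketch only: the lsnrctl check, the 60-second cap and the application path are assumptions, and lsnrctl needs to be on the PATH of the startup environment) is to make the init script wait until the listener actually knows the XE instance before launching the application:

#!/bin/sh
# Start Oracle XE, then poll the listener until it has registered the XE instance.
/etc/init.d/oracle-xe start

i=0
while [ $i -lt 60 ]; do
    # lsnrctl prints a line like: Instance "XE", status READY, ...
    if lsnrctl status | grep -q 'Instance "XE", status READY'; then
        break
    fi
    sleep 1
    i=$((i + 1))
done

# Placeholder for the actual application start command:
/path/to/my-application &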

FTP suddenly refuses connection after multiple & sporadic file transfers

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site and uploading many files here and there (small files, most of them a few dozen lines at most; mostly PHP and JS files, with some PNG and JPG files), after multiple uploads in a very short timeframe the FTP chokes on me. It cuts me off with a "connection refused" error from the server end, as if I were brute-force attacking the server or trying to overload it. Then after 30 minutes or so it seems to work again.
I have a dedicated server with inmotion hosting (which I do NOT recommend, but that's another story - I have too many accounts to switch over), so I have access to all logs etc. if you want me to look.
Here's what I have as settings so far:
My own IP is on the whitelist in the firewall.
FTP settings allow a maximum of 2000 connections at a time (which I am nowhere near hitting; most of the accounts I manage myself, without client access allowed).
Broken Compatibility ON
Idle time 15 minutes
Regular port 21
Regular FTP (not SFTP)
Access to a sub-domain of a major domain
Anyhow this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP on WHM doesn't seem to resolve it right away either - I just have to wait. However when I try to access the website directly through the browser, or use ping/traceroute commands to see if I can reach it, there's no problem - just the FTP is cut off.
The FTP server is configured for that behavior. If you cannot change its configuration (or switch to another FTP server program on the server), you can't avoid it.
For example, vsftpd has many such configuration switches.
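If the server were vsftpd (it may well be something else, such as pure-ftpd on a cPanel/WHM box), the kind of switches that produce this behavior look like this in vsftpd.conf (values are illustrative only):

# Total simultaneous clients the daemon will accept:
max_clients=50
# Simultaneous connections allowed from a single IP address:
max_per_ip=5
# Seconds before an idle session is dropped:
idle_session_timeout=900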
Switching to something else like scp or ssh should help.
(I'm not sure that calling your web host's support team idiots helps you.)

QSslSocket timeouts in Ubuntu Server, but not in Desktop

We have a problem with our Qt-based production server for our business application. As the total number of SSL connections increases over time, some clients do not manage to connect at all.
QSslSocket::waitForEncrypted() starts to fail with no QSslError, regardless of the timeout that was set. There are more than ~100 active connections when this problem starts to kick in.
So there are ~170 connections, twice as many threads, and "lsof" reports a little more than 1000 open files (we had to increase the file "ulimit" for that).
It does not look like a client problem, since the IPs that fail and reconnect change over time (some "leap in" successfully, while others don't).
As mentioned, this happens in Ubuntu Server (Zentyal 10.04 and "vanilla" 9.10), but does NOT in Ubuntu Desktop 9.10.
Everything runs inside VMware ESX 4.1; the systems were tested there with the same resources attached. System load stays below 1.0. The daemon runs with root permissions.
It looks like something to do with the "server"/"desktop" kernel or other configuration differences, but I couldn't tell what exactly would make the SSL handshake fail... on the server editions...
We are using Qt 4.5.3 compiled by ourselves.
EDIT: after all, it's the same on any Linux I tried. It feels like some kind of per-process socket limit, which is about 1016 minus other_opened_files. I'll try to create a new question about that.
EDIT 2: It's a select() and FD_SETSIZE limit problem...
The problem is that Qt uses select(), which is limited by the FD_SETSIZE macro in the maximum number of sockets/files it can watch. I had to change the FD_SETSIZE value inside /usr/include/bits/typesizes.h before compiling libQtNetwork and libQtCore.
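For reference, the value in question is glibc's default behind FD_SETSIZE (the exact header and layout vary by glibc version):

/* /usr/include/bits/typesizes.h (glibc) */
#define __FD_SETSIZE 1024   /* FD_SETSIZE derives from this; select() cannot watch
                               descriptors at or above this value */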
