I am trying to run basic OAF pages from Oracle JDeveloper. I have access to an Oracle EBS instance (both database and application server).
The page takes about 12 minutes to appear after I run it from JDeveloper.
The page I am running is a basic Hello World page.
Is this an intentional delay imposed on the EBS instance side?
Please provide some suggestions.
Regards
Abhishek
When you run from JDeveloper it takes time, because the files are compiled first and a connection to the server is then established. If you can't wait, upload the compiled files directly to the main server/instance.
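A rough sketch of that direct-upload approach is below. All paths, hostnames, the custom package (`xxcustom`), and credentials are placeholders you would replace with your own; the XMLImporter invocation follows the standard OAF deployment pattern.

```shell
# 1. Copy the compiled classes from JDeveloper's output directory
#    to $JAVA_TOP on the EBS application server (paths are examples):
scp -r myclasses/xxcustom applmgr@ebs-server:$JAVA_TOP/xxcustom

# 2. On the server, import the page definition into the MDS repository:
java oracle.jrad.tools.xml.importer.XMLImporter \
  $JAVA_TOP/xxcustom/oracle/apps/ak/hello/webui/HelloWorldPG.xml \
  -username apps -password <apps_password> \
  -dbconnection "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ebs-db)(PORT=1521))(CONNECT_DATA=(SID=PROD)))" \
  -rootdir $JAVA_TOP/xxcustom

# 3. Bounce Apache so the new class files are picked up:
adapcctl.sh stop && adapcctl.sh start
```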
In my experience, deploying a page from within JDeveloper takes about a minute, sometimes less. 12 minutes is way too much. One thing that could cause delays is the connectivity between JDeveloper on your local system and the database server in the central environment. Please check your database and runtime connections under "Default Project Properties". I run JDeveloper 10.1.3.5.
I have a f1-micro gcloud vm instance running Ubuntu 20.04.
It has 0.2 vCPUs and 600 MB of memory.
By "freezing/crashing" I mean that it simply stops responding to anything.
From my monitoring I can see that CPU usage peaks at 40% (usually steady under 1%), while memory usage is always around 60% (both stats with my Node.js server running).
When I open an SSH connection to the instance and run my Node.js server in the background, everything works fine as long as I keep the SSH connection alive. As soon as I close the connection, it takes a few more minutes until the instance freezes/crashes. Without closing the SSH connection, I can keep it running for hours without any problem.
I don't get any crash or freeze information from gcloud itself. The instance has a green checkmark and appears to still be running. I just can't open a new SSH connection, and the only way to do anything with the instance again is to restart it.
I have Cloud Logging active and there are no messages in there either.
So with this knowledge, my question is: does gcloud somehow boost SSH-connected VMs to keep them alive?
Because I don't know what else could cause this behaviour.
My Node.js server uses around 120 MB, another service uses 80 MB, and the GCP monitoring agent uses 30 MB. The Linux free command on the instance shows available memory between 60 MB and 100 MB.
In addition to John Hanley's and Mike's answers: you can edit your machine type based on your needs.
In the Google Cloud Console, go to VM instances under Compute Engine.
Select the instance name to open its Overview page.
Make sure to stop the instance before editing it.
Select a machine type that matches your application's needs.
Save.
For more info and guides, you may refer to the links below:
Edit Instance
Machine Family Categories
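The same change can be made with the gcloud CLI; the instance name, zone, and target machine type below are placeholders for your own values.

```shell
# The machine type can only be changed while the instance is stopped:
gcloud compute instances stop my-instance --zone=us-central1-a

# Switch to a machine type with more memory, e.g. e2-small (2 GB):
gcloud compute instances set-machine-type my-instance \
  --zone=us-central1-a --machine-type=e2-small

# Start the instance again:
gcloud compute instances start my-instance --zone=us-central1-a
```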
Since there were no answers explaining the strange behaviour I encountered: I still haven't figured it out either, but at least my servers no longer crash/freeze.
I somehow fixed it by running my Node.js application as a proper background job using forever, instead of running it with node main.js &.
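For anyone wanting to reproduce this fix, a minimal sketch (assuming Node.js and npm are already installed, and that main.js is your entry point):

```shell
# Install forever globally:
npm install -g forever

# Start the app as a managed background process
# instead of `node main.js &`:
forever start main.js

# Verify it is running and inspect its log files:
forever list

# Stop it later with:
forever stop main.js
```

Unlike `node main.js &`, the process is detached from the SSH session and restarted by forever if it exits.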
I have an almost static site that I was happy hosting in Azure Blob Storage. However, I need a PHP script to handle email sent through the contact HTML form.
So I decided to buy the smallest VM, a B1ls, which has 1 CPU and 0.5 GB of memory. I RDP to the server and, to my astonishment, I cannot even open one file or folder or Task Manager without waiting endlessly before getting "Out of memory... please try to close programs or restart"!
The Azure team should not sell such a VM if it is nonfunctional from the get-go. Note that I installed ZERO programs on it.
All I want is PHP, the site set up on IIS, and a certificate added to it. NO database or any other programs will run on it.
What should I do?
Apparently it is because "B1ls is supported only on Linux", per the notes on their page.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/sizes-general
My working system is CentOS 7. I am developing a web application that uses an Oracle database, so I use Instant Client 11.2 to connect to Oracle from my CentOS 7 machine, and the connection takes a very long time! Although it eventually succeeds, I have to wait at least 3 minutes. When the web application starts up it initializes its datasources, and each datasource also takes at least 3 minutes to finish! Oddly, my colleagues who connect or initialize datasources from Windows are really quick (I am the only one on Linux). The database is installed on a Linux server (also CentOS 7), and we all use the same network environment. How can I fix this problem? I love Linux so much that I don't want to switch to Windows for work. Thanks for helping me solve this!
Here is the problem: We have a client that uses Progress Openedge database, we need to execute queries on this database from our servers.
Currently the drivers are installed on our Windows server, and the PHP code uses ODBC to run the queries.
Now we would like to move the code to a Linux server. We previously tried to work with their Linux drivers, but that attempt failed.
The question is: is it somehow possible to run PHP code on a Linux server that communicates with the Windows server, runs the query on the Windows server, and returns the results to Linux?
How would you approach this problem?
Thanks!
Yes, it's possible. Your question boils down to "how can my Linux server ask my Windows server to do something" (where the "something" happens to be "talk to a database"), and there are a variety of ways to accomplish that. You could run a web service (RESTful or SOAP) on the Windows server, for example.
Make sure you think about security: if you deploy a service on your Windows server that lets remote clients modify a database, you have to be mindful of which remote clients are allowed to use that service. The last thing you want to do is accidentally allow random strangers to run arbitrary queries against your database.
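As an illustration only: a minimal sketch of such a service in PHP, running on the Windows server that already has the working ODBC driver. The DSN name, credentials, table, and query are all placeholders, and the whitelist-of-named-queries pattern is one simple way to avoid accepting raw SQL from remote clients; real deployments would add authentication on top.

```php
<?php
// query.php -- runs on the Windows server with the OpenEdge ODBC driver.
// Sketch only: 'OpenEdgeDSN', the credentials, and the query are placeholders.

header('Content-Type: application/json');

// Whitelist named queries instead of accepting arbitrary SQL from clients.
$queries = [
    'customers' => 'SELECT CustNum, Name FROM PUB.Customer',
];

$name = $_GET['q'] ?? '';
if (!isset($queries[$name])) {
    http_response_code(400);
    echo json_encode(['error' => 'unknown query']);
    exit;
}

$conn = odbc_connect('OpenEdgeDSN', 'user', 'password');
$result = odbc_exec($conn, $queries[$name]);

$rows = [];
while ($row = odbc_fetch_array($result)) {
    $rows[] = $row;
}
odbc_close($conn);

echo json_encode($rows);
```

The Linux side then just makes an HTTP request, e.g. `json_decode(file_get_contents('http://windows-server/query.php?q=customers'), true)`.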
We have a Knowledge Base article detailing some setup procedures for Linux installations; it also has a video explaining some aspects of the setup. If the other answers haven't provided a complete solution for you, hopefully our article can at least get you started in the right direction.
Also keep in mind that depending on your version of OE, the driver libraries may be different.
System Spec:
VPS running Windows Server 2008 R2 SP1
64-bit dual core 2.39GHz VCPU
2GB RAM
Parallels Plesk for Windows 10.4.4
IIS 7.5
PHP 5.2.17
MySQL 5.1.56
I have a PHP script to loop through a static file and import each line as a row in MySQL. This works fine if the file is split into several thousand lines at a time, but this creates a lot of manual effort.
The whole file contains around 160,000 lines to be imported. The script currently connects to the database via mysql_connect / mysql_select_db, processes the loop with mysql_query, and disconnects at the end of the loop. However, at any point between around 55 seconds and 1 minute 35 seconds, the client browser returns a 500 Internal Server Error page that contains no useful diagnostic info.
I have tried increasing the max connection times of MySQL, PHP, IIS and even the max user sockets for winsock, to no avail.
I tried performing a connect / disconnect to MySQL for each insert query, but this caused thousands of connections to the server which were then stuck in a "TIME_WAIT" state, and returned a "could not connect to server" error, presumably due to insufficient sockets remaining. I have also tried both the mysql and mysqli extensions.
I have looked through all the logs I can find for IIS and MySQL, but cannot see anything that would help with finding the cause.
The last two attempts inserted 33,979 and 78,173 rows respectively.
Can anyone offer any assistance?
Thanks.
** UPDATE **
This must be an IIS issue. I have converted the script to run via command-line PHP and it processes the whole file with no issues.
Sounds like an IIS issue. Most that I have found reside in the Web.config file. I would take a look at that and make sure the settings and the syntax are correct. Many a time I have forgotten to close my tags and received a 500 error.
Use LOAD DATA INFILE instead of trying to do the INSERTs via PHP. It will run a lot faster, thereby avoiding the 500 error.
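A sketch of what that could look like, assuming a hypothetical table `my_table` with columns `col1`–`col3`, a comma-separated file, and the Windows path from a setup like yours (using LOCAL requires `local_infile` to be enabled on both client and server):

```php
<?php
// Sketch only: table name, columns, and file path are assumptions.
// One LOAD DATA statement bulk-loads the file instead of ~160,000 INSERTs.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

$sql = "LOAD DATA LOCAL INFILE 'C:/data/import.txt'
        INTO TABLE my_table
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\r\\n'
        (col1, col2, col3)";

if (!$db->query($sql)) {
    die('Import failed: ' . $db->error);
}
echo $db->affected_rows . " rows imported\n";
$db->close();
```

Because the server does the parsing and inserting in one statement, the PHP request finishes quickly and never hits the timeout that was producing the 500 error.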
Do not even consider using the mysql_* interface; it is deprecated and has been removed in the latest PHP releases. Switch to mysqli or PDO.
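If you do keep the row-by-row approach, a minimal mysqli sketch (table, columns, and input file are assumptions) that reuses one prepared statement and wraps the loop in a single transaction, avoiding both a connection and a commit per row:

```php
<?php
// Sketch: table/column names and the input file are assumptions.
$db = new mysqli('localhost', 'user', 'password', 'mydb');

// One connection and one prepared statement for all rows; a single
// transaction avoids a commit (and its disk flush) on every insert.
$stmt = $db->prepare('INSERT INTO my_table (col1, col2) VALUES (?, ?)');
$db->begin_transaction();

foreach (file('import.txt', FILE_IGNORE_NEW_LINES) as $line) {
    [$col1, $col2] = explode(',', $line);
    $stmt->bind_param('ss', $col1, $col2);
    $stmt->execute();
}

$db->commit();
$stmt->close();
$db->close();
```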