Server process being killed on a Linux DigitalOcean VM - Node.js

I am trying to run a Next.js server on a DigitalOcean virtual machine. The server works, but when I run npm run start, the logs say Killed after ~1 minute.
Here is an example log of what happens:
joey@mydroplet:~/Server$ sudo node server
info - SWC minify release candidate enabled. https://nextjs.link/swcmin
event - compiled client and server successfully in 3.3s (196 modules)
wait - compiling...
event - compiled client and server successfully in 410 ms (196 modules)
> Ready on https://localhost:443
> Ready on http://localhost:8080
wait - compiling / (client and server)...
event - compiled client and server successfully in 1173 ms (261 modules)
Killed
joey@mydroplet:~/Server$
After some research, I came across a couple of threads describing a server lacking enough memory/resources to continue the operation. I upgraded the memory from 512 MB to 1 GB, but this still happens.
Do I need to upgrade the memory further?
This is the plan that I am on:

It was the memory. Upgrading the memory of the server from 1 GB to 2 GB solved this problem.
This is the plan that worked for me:
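A sudden Killed with no stack trace usually means the kernel's OOM killer terminated the process, and you can confirm that from the kernel log before paying for more memory. The commands below are a sketch, assuming a Debian/Ubuntu droplet with root access; the 1 GB swap file is a common stopgap for short memory spikes (the path and size are illustrative):

```shell
# Check whether the kernel's OOM killer terminated the node process
sudo dmesg | grep -iE 'out of memory|killed process'

# Stopgap: add a 1 GB swap file so brief memory spikes don't kill the server
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -h   # verify the swap is active
```

Swap on a small droplet is slower than real RAM, so treat it as a safety net rather than a substitute for the upgrade described above.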

Related

VPS server: modern browser won't start (Linux)

I have a Linux VPS running at Strato with a desktop (Xfce4).
But when I try to run a modern browser (Chrome/Firefox), it won't start.
firefox:[GFX1-]: glxtest: cannot access /sys/bus/pci
[GFX1-]: Compositor thread not started (true)
As root:
invalid MIT-MAGIC-COOKIE-1 key
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :10.0
or I get segmentation faults.
I have 4 GB of RAM with 2 GB used, so there are no RAM issues.
Any idea?

How to reduce I/O and memory usage when installing node.js on shared hosting?

I've tried to install node.js via PuTTY on my shared hosting account with cPanel and CloudLinux. But at some point the I/O and physical memory usage reached their limits and the installation process was stopped. My I/O usage limit is 10 MB and my physical memory limit is 512 MB.
This happens when Putty displays the lines:
-c -o /home/vikna211/node/out/Release/obj.target/v8_base/deps/v8/src/api.o
../deps/v8/src/api.cc
After that I see:
make[1]: [/home/vikna211/node/out/Release/obj.target/v8_base/deps/v8/src/api.o] Interrupt
make[1]: Deleting intermediate file `4095d8cbfa2eff613349def330937d91ee5aa9c9.intermediate'
make: [node] Interrupt
Is it possible to reduce the usage of both resources when installing node.js to successfully finish the process?
Or maybe it's not a memory problem: perhaps the process tries to delete that intermediate file, can't, and that causes the crash.

Neo4j refused to connect

Characteristics :
Linux
Neo4j version 3.2.1
Remote access
Installation
I installed Neo4j and gave the folder chmod 777.
I'm running it remotely on my machine and I had already enabled non-local access.
Running neo4j start I get this message:
Active database: graph.db
Directories in use:
home: /home/cloudera/Muna/apps/neo4j
config: /home/cloudera/Muna/apps/neo4j/conf
logs: /home/cloudera/Muna/apps/neo4j/logs
plugins: /home/cloudera/Muna/apps/neo4j/plugins
import: /home/cloudera/Muna/apps/neo4j/import
data: /home/cloudera/Muna/apps/neo4j/data
certificates: /home/cloudera/Muna/apps/neo4j/certificates
run: /home/cloudera/Muna/apps/neo4j/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 9469). It is available at http://0.0.0.0:7474/
There may be a short delay until the server is ready.
See /home/cloudera/Muna/apps/neo4j/logs/neo4j.log for current status.
but it is not connecting in the browser.
Running neo4j console gives:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 409600000 bytes for AllocateHeap
# An error report file with more information is saved as:
# /home/cloudera/hs_err_pid18598.log
Where could the problem be coming from?
First, you should set the maximum number of open files to 40000, which is the recommended value; then you will not get the WARNING. See: http://neo4j.com/docs/1.6.2/configuration-linux-notes.html
Second, 'failed to allocate memory' means that the Java virtual machine cannot allocate the amount of memory it was started with.
It can be a misconfiguration, or you physically do not have enough memory.
Please read the memory sizing guidelines here:
https://neo4j.com/docs/operations-manual/current/performance/
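On most Linux distributions the open-files limit is raised via PAM limits. A minimal sketch, assuming the service runs under a user named neo4j (adjust the user name to whatever account actually starts Neo4j) - lines to add to /etc/security/limits.conf:

```
# /etc/security/limits.conf
neo4j   soft    nofile  40000
neo4j   hard    nofile  40000
```

After starting a fresh session as that user, `ulimit -n` should report 40000 and the WARNING should disappear.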

Azure Service Fabric: cannot run local Service Fabric Cluster

I have a problem with Azure Service Fabric.
I have installed it (on Windows 7) as it was said in https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-get-started/.
Then I have tried to run a Service Fabric application from Visual Studio 2015. I got an error “Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue”.
Here is the full log of that run:
1>------ Build started: Project: Application2, Configuration: Debug x64 ------
2>------ Deploy started: Project: Application2, Configuration: Debug x64 ------
-------- Package started: Project: Application2, Configuration: Debug x64 ------
Application2 -> c:\temp\Application2\Application2\pkg\Debug
-------- Package: Project: Application2 succeeded, Time elapsed: 00:00:01.7361084 --------
2>Started executing script 'Set-LocalClusterReady'.
2>Import-Module 'C:\Program Files\Microsoft SDKs\Service Fabric\Tools\Scripts\DefaultLocalClusterSetup.psm1'; Set-LocalClusterReady
2>--------------------------------------------
2>Local Service Fabric Cluster is not setup...
2>Please wait while we setup the Local Service Fabric Cluster. This may take few minutes...
2>
2>Using Cluster Data Root: C:\SfDevCluster\Data
2>Using Cluster Log Root: C:\SfDevCluster\Log
2>
2>Create node configuration succeeded
2>Starting service FabricHostSvc. This may take a few minutes...
2>
2>Waiting for Service Fabric Cluster to be ready. This may take a few minutes...
2>Local Cluster ready status: 4% completed.
2>Local Cluster ready status: 8% completed.
2>Local Cluster ready status: 12% completed.
2>Local Cluster ready status: 17% completed.
2>Local Cluster ready status: 21% completed.
2>Local Cluster ready status: 25% completed.
2>Local Cluster ready status: 29% completed.
2>Local Cluster ready status: 33% completed.
2>Local Cluster ready status: 38% completed.
2>Local Cluster ready status: 42% completed.
2>Local Cluster ready status: 46% completed.
2>Local Cluster ready status: 50% completed.
2>Local Cluster ready status: 54% completed.
2>Local Cluster ready status: 58% completed.
2>Local Cluster ready status: 62% completed.
2>Local Cluster ready status: 67% completed.
2>Local Cluster ready status: 71% completed.
2>Local Cluster ready status: 75% completed.
2>Local Cluster ready status: 79% completed.
2>Local Cluster ready status: 83% completed.
2>Local Cluster ready status: 88% completed.
2>Local Cluster ready status: 92% completed.
2>Local Cluster ready status: 96% completed.
2>Local Cluster ready status: 100% completed.
2>WARNING: Service Fabric Cluster is taking longer than expected to connect.
2>
2>Waiting for Naming Service to be ready. This may take a few minutes...
2>Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check
2>if there is connectivity/firewall/DNS issue.
2>At C:\Program Files\Microsoft SDKs\Service
2>Fabric\Tools\Scripts\ClusterSetupUtilities.psm1:521 char:12
2>+ [void](Connect-ServiceFabricCluster @connParams)
2>+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2> + CategoryInfo : InvalidOperation: (:) [Connect-ServiceFabricClus
2> ter], FabricException
2> + FullyQualifiedErrorId : TestClusterConnectionErrorId,Microsoft.ServiceFa
2> bric.Powershell.ConnectCluster
2>
2>Naming Service ready status: 8% completed.
2>Naming Service ready status: 17% completed.
2>Naming Service ready status: 25% completed.
2>Naming Service ready status: 33% completed.
2>Naming Service ready status: 42% completed.
2>Naming Service ready status: 50% completed.
2>Naming Service ready status: 58% completed.
2>Naming Service ready status: 67% completed.
2>Naming Service ready status: 75% completed.
2>Naming Service ready status: 83% completed.
2>Naming Service ready status: 92% completed.
2>Naming Service ready status: 100% completed.
2>WARNING: Naming Service is taking longer than expected to be ready...
2>Local Service Fabric Cluster created successfully.
2>--------------------------------------------------
2>Launching Service Fabric Local Cluster Manager...
2>You can use Service Fabric Local Cluster Manager (system tray application) to manage your local dev cluster.
2>Finished executing script 'Set-LocalClusterReady'.
2>Time elapsed: 00:07:01.8147993
2>The PowerShell script failed to execute.
========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
This worked for me and I stumbled upon the solution on an MSDN forum.
Most likely your C++ runtime is corrupt and that needs to be reinstalled.
You will have to manually execute vcredist_x64.exe which can be found at
C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\vcredist_x64.exe
Once that is done, you can choose whether to reboot your machine; I chose to reboot it, and then ran the following commands:
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\CleanCluster.ps1
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\DevClusterSetup.ps1
I hope this helps!
I had to disconnect my VPN from my office as well as follow the directions that Varun had shown above.
Thanks @Varun for sharing this!
Once I connected my VPN, I could not run the system again.
Hope this helps someone.
After trying many solutions, I found that this GitHub Issue helped: add an entry to the setup configuration, then recreate the cluster.
Configuration File
C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\OneNode\ClusterManifestTemplate.json:
Entry to add
{
    "name": "FabricContainerAppsEnabled",
    "value": "false"
}
By default, the Service Fabric development instance listens only on the loopback address (run the netstat -an command in your CMD to see which ports are open). Port 19000 is bound only to 127.0.0.1 and [::1]:19000, so external connections are refused; after the change below, the SF instance listens on port 19000 on all addresses and is available for external connections.
All you need is to change IPAddressOrFQDN from localhost to your external IPv4 address. The change should be visible in the manifest file:
All SF cluster configuration files in windows 7 are located in C:\SfDevCluster\Data. There you can find many XML documents like:
clusterManifest
FabricHostSettings
To change IPAddressOrFQDN, I replaced all occurrences of the word 'localhost' with 'my.ip.address.here'. The files to be modified are also nested deeper:
C:\SfDevCluster\Data\_Node_0\Fabric
C:\SfDevCluster\Data\_Node_0\Fabric\Fabric.Data
C:\SfDevCluster\Data\_Node_0\Fabric\Fabric.Config.0.0
and so on
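For illustration, the replacement in the cluster manifest looks roughly like this. This is a sketch: the exact element layout and attribute set vary between the generated files, and the IP address shown is a placeholder.

```xml
<!-- clusterManifest (sketch; other attributes omitted) -->
<!-- before -->
<Node Name="_Node_0" IPAddressOrFQDN="localhost" />
<!-- after: use your machine's external IPv4 address -->
<Node Name="_Node_0" IPAddressOrFQDN="192.168.1.20" />
```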
Before making the changes, stop your cluster; after the changes, start it again.
There is another, better method: you can change the generating scripts in the directory C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup.
After a cluster reset these settings will be lost. Remember that you do this at your own risk; the development version is unsecured and should not be used in a production environment.
If after the changes you cannot connect to SF, make sure the firewall doesn't block the port (you can temporarily disable your firewall).
If reinstalling the C++ runtime does not solve your issue, you can narrow the problem down to a firewall issue. The network may be preventing the FabricHostSvc service from running (you can check it in services.msc) due to security restrictions. If you are able to disable your firewall or switch to a network without restrictions, the issue should resolve on its own.
If you are able to get FabricHostSvc started, your issue will be solved.
Hope this helps.
The first answer (given by @Varun Rathore) worked for me. But as I was trying to deploy containers on Service Fabric locally, switching from a 1-node cluster to a 5-node cluster again and again, this error came up often. So what I did was open PowerShell as Administrator and follow these two steps.
First, go to the following path:
cd "C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup"
Second, run these two files:
.\CleanCluster.ps1
.\DevClusterSetup.ps1
This will setup 1 node cluster on your local machine.
You can do this from Service Fabric Cluster Manager, but it did not work for me. And repairing vcredist_x64.exe and rebooting your computer (Windows 10 Pro) every time is frustrating. This works for me.
Either you can reset the fabric cluster (if you don't have a stateful service and don't need data persistence), or you can re-launch the fabric cluster; sometimes switching nodes also helps.

Nodejs to Couchbase Server timed out

My app actually runs fine when I start it and keep an eye on it for a few hours.
But later (I'm not sure after exactly how long a period of inactivity), it shows "Server timed out" (I cropped some logs below):
[ERROR] (server - L:463) <node.mycouchbase.server:11210> (SRV=0x2111c60,IX=4) Server timed out. Some commands have failed
[INFO] (confmon - L:166) Not applying configuration received via CCCP. No changes detected. A.rev=152466, B.rev=152466
[INFO] (cccp - L:110) Re-Issuing CCCP Command on server struct 0x2116980
[ERROR] (cccp - L:133) <NOHOST:NOPORT> Got I/O Error=0x17
[INFO] (cccp - L:110) Re-Issuing CCCP Command on server struct 0x2185e30
Everything works fine again when I restart my Node.js (Express) app.
This problem seems to happen regularly.
Please give me some suggestions about what the actual problem behind this could be.
Thanks
