Unable to run nw.js application in ubuntu - node.js

I created a node-webkit hello world application based on the tutorial given in this link, and tried to run it on Ubuntu using the command given in this link. But when I run the command nw /home/myUsername/Documents/myNodeWebkitApps/helloWorld/myApp.nw it prints the following in the terminal.
usage
nw [udp] <options> <host> <port>
Default TCP protocol can be changed to UDP by ``udp'' argument.
UDP options
currently none
TCP options
-f firewall mode, connection is initiated by netread.
Host specification is ignored and can be omited.
-c ignored. Transmission checksum is activated by
default.
-C algorithm use the specified algorithm for checksum. This
option also implies -c.
Supported algorithms (the first is default):
md5 none
general options
-i <file> read data from file instead of stdin.
-b print speed in b/s instead of B/s
-h <n> print `#' after each n KiB transferred (def. 10485.76).
-H <n> print `#' after each n MiB transferred (def. 10.24).
-q be quiet.
-v be verbose.
-vv be very verbose.
-V show version.
-vV show verbose version.
return values
0 no errors.
1 some error occured.
2 checksum validation failed.
How can I run the same as given by first link?

The output here did not come from nw.js but from netrw, which is installed on your machine. You can fix it by removing netrw from your machine or by using the correct path to the nw.js binary.
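A quick diagnostic along these lines (a sketch; it simply asks the shell which nw it would run):

```shell
# Ask the shell which `nw` binary it resolves. If the result is netrw's
# client rather than the nw.js binary, that explains the usage text above.
command -v nw || echo "nw is not on PATH"
```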

Finally, I managed to run the hello world application with the help of this link and this stackoverflow answer, like so:
Install nw-builder with the command npm install nw-builder -g
If you get an error like /usr/bin/env: node: No such file or directory then, as described in the second link above, symlink node with ln -s /usr/bin/nodejs /usr/bin/node
Now we can run our application with the command nwbuild -r ~/Desktop/webkit-example
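For reference, a minimal node-webkit app along the lines of the tutorial can be sketched as follows (the file names and package.json contents are assumptions, not taken from the tutorial):

```shell
# Create a minimal node-webkit "hello world" app skeleton.
mkdir -p helloWorld
cat > helloWorld/package.json <<'EOF'
{
  "name": "helloWorld",
  "main": "index.html"
}
EOF
cat > helloWorld/index.html <<'EOF'
<!DOCTYPE html>
<html><body><h1>Hello World</h1></body></html>
EOF
# The folder can then be run with: nwbuild -r helloWorld
```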

Related

Cygwin mount -p returns blank

I run a script that gets the cygdrive prefix by running mount -p. Usually this returns:
$mount -p
Prefix Type Flags
/cygdrive system binmode
Now the command returns the heading, but no cygdrive prefix, which looks like this:
$mount -p
Prefix Type Flags
The system is currently running without exhibiting any other problems, but my script is failing to determine the prefix. Where is this information stored, /etc/something?
I dug through .bash_history and found that someone (it is a shared system) had run "umount -c", which removes the cygdrive prefix. The solution was to run "mount -c /cygdrive". This added /cygdrive back when running "mount -p".
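A script that extracts the prefix from mount -p output might look like the sketch below (the parsing approach is an assumption about the asker's script; the sample output is hard-coded here since Cygwin's mount is not available outside Cygwin):

```shell
# Parse the cygdrive prefix out of `mount -p`-style output. If the second
# line is missing (as after `umount -c`), the prefix comes back empty.
sample_output='Prefix Type Flags
/cygdrive system binmode'
prefix=$(printf '%s\n' "$sample_output" | awk 'NR==2 {print $1}')
echo "prefix=$prefix"
```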

can't find curl-config on NixOS

I'm trying to install vim-now-haskell on NixOS 17.09.
The installation fails with the exception
*** Installer requires 'curl-config'. Please install and try again.
*** Aborting...
Unfortunately, Search NixOS packages doesn't list any package for curl-config, and it isn't in the installed curl-7.56.1 either. I can hardly believe NixOS doesn't provide curl-config, so I'm looking here for help finding it.
It's in the curl.dev output.
$ ls $(nix-build --no-out-link '<nixpkgs>' -A curl.dev)/bin
curl-config
The reason for this is to keep the closure size of anything that uses curl small. Normally Nixpkgs' stdenv.mkDerivation takes care of those details when processing the buildInputs attribute, but if you're not using Nixpkgs to build something, you may have to do a bit more work manually.
Another way to get the curl-config command is nix-shell -p curl, which launches a shell that has the command in its environment.
[user@feb:~]$ nix-shell -p curl
[nix-shell:~]$ curl-config --version
libcurl 7.65.3
[nix-shell:~]$ exit
[user@feb:~]$
NixPkgs also has a curlFull package which has more features enabled, as can be seen by the number of packages in the closure of the runtime library output:
$ nix-store -q --requisites $(nix-build --no-out-link '<nixpkgs>' -A curl.out) | wc -l
6
$ nix-store -q --requisites $(nix-build --no-out-link '<nixpkgs>' -A curlFull.out) | wc -l
29
So for a more capable build of curl, use curlFull.dev.

Installation of Cron in cygwin

When I run the following command in cygwin,
$ cygrunsrv -I cron -p C:\cygwin64\bin --args -n
I get the following error
cygrunsrv: Given path doesn't point to a valid executable
Why am I getting this error?
You only gave a folder, not a path to the executable. Besides that, I wouldn't recommend using Windows paths in Cygwin; they can cause errors. You should write /cygdrive/c/cygwin64/bin/something instead of C:\cygwin64\bin\something.exe
Perhaps you are looking for an installation guide, and you would like to do something like this:
Install cron as a Windows service, using cygrunsrv:
cygrunsrv -I cron -p /usr/sbin/cron -a -D
net start cron

Wget ANSI Error

I am trying to recursively download files from a specific website and I am encountering an error I've never seen before and which google comes up blank on. The command I'm entering is:
wget -m -p -E -k -K -np http://www.slac.stanford.edu/~timb/500/1f_3f_production/ae_1f/E0500-TDR_ws.Pae_ea.Gwhizard-1.95.eB.pL.I37470/
and the output is:
Conversion from 'ANSI_X3.4-1968' to 'ANSI_X3.4-1968' isn't supported
zsh: segmentation fault (core dumped) wget -m -p -E -k -K -np
The error seems to occur no matter what arguments I use. More strangely, it has no problems if I download each file in the directory individually. Does anybody have an idea what this error means?
Your exact command works for me on Windows 7 CMD line. I get 9 files. Google reports some problems with zsh. Can you try bash?
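ANSI_X3.4-1968 is iconv's name for plain ASCII, and wget derives the charset from the locale, so inspecting the locale is a reasonable first step (a diagnostic sketch, not a confirmed fix for the segfault):

```shell
# Show the active locale settings that wget's charset conversion uses.
locale
# Forcing a UTF-8 locale for a single run sometimes avoids the
# conversion path entirely (URL abbreviated here):
#   LC_ALL=en_US.UTF-8 wget -m -p -E -k -K -np http://www.slac.stanford.edu/...
```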

Docker Ignores limits.conf (trying to solve "too many open files" error)

I'm running a web server that is handling many thousands of concurrent web socket connections. For this to be possible, on Debian linux (my base image is google/debian:wheezy, running on GCE), where the default number of open files is set to 1000, I usually just set the ulimit to the desired number (64,000).
This works out great, except that when I dockerized my application and deployed it - I found out that docker kind of ignores the limit definitions. I have tried the following (all on the host machine, not on the container itself):
MAX=64000
sudo bash -c "echo \"* soft nofile $MAX\" >> /etc/security/limits.conf"
sudo bash -c "echo \"* hard nofile $MAX\" >> /etc/security/limits.conf"
sudo bash -c "echo \"ulimit -c $MAX\" >> /etc/profile"
ulimit -c $MAX
After doing some research I found that people were able to solve a similar issue by doing this:
sudo bash -c "echo \"limit nofile 262144 262144\" >> /etc/init/docker.conf"
and rebooting / restarting the docker service.
However, all of the above fail: I am getting the "too many open files" error when my app runs inside the container (running the same app without Docker does not hit the limit).
I have tried to run ulimit -a inside the container to get an indication if the ulimit setup worked, but doing so throws an error about ulimit not being an executable that's a part of the PATH.
Has anyone run into this and/or can suggest a way to get Docker to recognize the limits?
I was able to mitigate this issue with the following configuration:
I used Ubuntu 14.04 for both the Docker machine and the host machine.
On the host machine you need to:
update /etc/security/limits.conf to include: * - nofile 64000
add to your /etc/sysctl.conf: fs.file-max = 64000
reload sysctl settings: sudo sysctl -p
You can pass the limit as argument while running the container. That way you don't have to modify host's limits and give too much power to the container. Here is how:
docker run --ulimit nofile=5000:5000 <image-tag>
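One detail worth noting for the asker's ulimit -a failure: ulimit is a shell builtin, not an executable, so inside a container it has to be invoked through a shell. A sketch (the Docker invocation is illustrative; the image tag is a placeholder):

```shell
# `ulimit` is a builtin, so run it via `sh -c`. Inside a container:
#   docker run --ulimit nofile=5000:5000 <image-tag> sh -c 'ulimit -n'
# The same invocation works locally to inspect the current soft limit:
sh -c 'ulimit -n'
```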