I am deploying a web app for the first time, following this tutorial: https://learn.microsoft.com/en-us/azure/developer/javascript/tutorial-vscode-azure-app-service-node-03
In the example it uses npm start, but I have been using node app.js to start my application locally. Also, my code uses 127.0.0.1; do I change this to the created URL? When I deployed and went to the Azure URL, I got:
azurewebsites.net is currently unable to handle this request. HTTP ERROR 500.
Thank you for any help!
var config = {
  database: {
    host: 'db1.mysql.database.azure.com', // Azure Database for MySQL server
    user: 'user',
    password: 'password',
    port: 3306,
    db: 'db1'
  },
  server: {
    host: '127.0.0.1', // local development binding
    port: '3000'
  }
}
module.exports = config
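For reference, App Service starts a Node app with npm start and passes the listening port through the PORT environment variable; the code is not changed to the azurewebsites.net URL, the app simply listens on whatever port it is given. A minimal sketch of how app.js might pick that up, assuming an Express app (the file and property names mirror the config above):

const express = require('express');
const config = require('./config');

const app = express();
app.get('/', (req, res) => res.send('Hello from App Service'));

// App Service sets process.env.PORT; fall back to the local config for development.
const port = process.env.PORT || config.server.port;
app.listen(port, () => console.log('Listening on port ' + port));

With a "start": "node app.js" entry under scripts in package.json, npm start and node app.js are equivalent, so the tutorial's steps still apply.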
UPDATE
You can deploy your web app with git. I have created a new demo for you; you need to change the MySQL info.
PREVIOUS
I use VS Code and follow the official documentation; there is no need to modify the port, just follow the steps and publish directly. I suggest you use Linux, as it will reduce a lot of problems when creating a Node web app.
You can read the official documentation carefully. The demo I provided can be downloaded and run, and it supports connecting to MySQL.
This screenshot indicates a successful release.
I am not familiar with Express, but it is normal when debugging. This screenshot is consistent with my local run. Our focus is on publishing and connecting to MySQL; please see the screenshot of the local run below.
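For context, connecting to Azure Database for MySQL from Node usually just means pointing the client at the Azure host with TLS enabled. A minimal sketch, assuming the mysql npm package and the config shape from the question (host names and credentials are placeholders):

const mysql = require('mysql');
const config = require('./config');

// Azure Database for MySQL requires TLS by default, so enable ssl on the connection.
const connection = mysql.createConnection({
  host: config.database.host,        // e.g. db1.mysql.database.azure.com
  user: config.database.user,        // older single-server tiers expect user@servername
  password: config.database.password,
  database: config.database.db,
  port: config.database.port,
  ssl: { rejectUnauthorized: true }  // verify the server certificate
});

connection.connect(function (err) {
  if (err) {
    console.error('MySQL connection failed:', err.message);
    return;
  }
  console.log('Connected to Azure Database for MySQL');
});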
Recently I've been learning Google Cloud SQL. It took a little while, but I was able to connect my Cloud SQL Auth Proxy to my Postgres client. However, I'm not sure how to query or make POST requests to my Cloud SQL database. Originally I was just doing:
const Pool = require("pg").Pool;
const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "localhost",
  port: 5432,
  database: "somedb"
});
I'm not sure how to convert this over to query the Cloud SQL database. I did try converting it to:
const Pool = require("pg").Pool;
const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "[cloud sql ip]",
  port: 5432,
  database: "[pg/gc db]"
});
I end up getting the error [pg_hba.conf rejects connection for host "[ipv4 ip]", user "postgres", database "[pg/gc db]", no encryption]. I know that the documentation has a code sample, but I don't understand it and can't really find any resources explaining it.
Edit: I am uploading files to a bucket in Cloud Storage, which I was able to do successfully. I plan on mapping all these files onto a webpage. However, I would like to filter them by certain features, so I am making a second request after I store the file. The second request will store attributes into a database that I can then relate to the files for filtering.
If you're running the auth proxy from your local machine where you're running your application, then the code will be the same from your application's code perspective. You'll still connect to localhost (although you may need to connect to 127.0.0.1 depending on how you have hosts set up on the machine).
The database field will depend on how you've set up the database in Cloud SQL, but it should be the same as your local database. E.g. if you created a database named "somedb" in Cloud SQL, you don't have to change anything to connect to it in Cloud SQL. The proxy running locally will make everything behave as if you're running the database locally from the application's perspective.
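To illustrate, a sketch of what that looks like with the Auth Proxy listening on its default local port (the pool settings are taken from the snippets above; only the host and port matter here):

// Start the proxy separately, e.g.:
//   ./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:5432
// Then the application connects to it exactly as it would to a local Postgres.
const Pool = require("pg").Pool;

const pool = new Pool({
  user: "postgres",
  password: "****",
  host: "127.0.0.1",   // the proxy listens locally and forwards to Cloud SQL
  port: 5432,
  database: "somedb"   // same database name you created in Cloud SQL
});

// Simple query to confirm the connection works end to end.
pool.query("SELECT NOW()", (err, res) => {
  if (err) {
    console.error("Query failed:", err.message);
  } else {
    console.log("Connected, server time:", res.rows[0].now);
  }
});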
Edit: This particular answer wasn't the issue they were having, but in the comments it came up that both the Proxy and SSL-only mode were being used. That combination is (generally) not recommended because it doubles up the SSL/TLS usage: the Proxy already uses generated SSL certificates to connect to Cloud SQL, so database-level SSL is a redundancy that's likely not needed. There are some edge cases where you may want both, but broadly speaking one or the other is recommended.
TypeError [ERR_INVALID_URL]: Invalid URL:
mssql://testuser:testuser@192.168.1.70\SQLEXPRESS/MyTestDB
The node package is: https://www.npmjs.com/package/mssql
As per their Quick Example
await sql.connect('mssql://username:password@localhost/database')
I am doing this
await sql.connect('mssql://testuser:testuser@192.168.1.70\\SQLEXPRESS/MyTestDB');
My details are:
SQL Server instance: 192.168.1.70\SQLEXPRESS
DB name: MyTestDB
uid/pwd: testuser, testuser
I have no issues connecting from SQL Server Management Studio from a remote machine, so what is the problem in this code?
A JSON-style config object works, whereas the plain connection string does not:
let config = {
  user: 'testuser',
  password: 'testuser',
  server: '192.168.1.70\\SQLEXPRESS',
  database: 'MyTestDB'
}
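For completeness, a sketch of how that config object is typically passed to the driver, assuming the mssql package from the question (the test query is just an example):

const sql = require('mssql');

let config = {
  user: 'testuser',
  password: 'testuser',
  server: '192.168.1.70\\SQLEXPRESS', // named instance: host + instance name
  database: 'MyTestDB'
};

(async () => {
  try {
    const pool = await sql.connect(config);                  // global connection pool
    const result = await pool.request().query('SELECT 1 AS ok');
    console.log(result.recordset);                           // [ { ok: 1 } ]
  } catch (err) {
    console.error('Connection failed:', err.message);
  }
})();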
Update:
From a GitHub issue it became clearer that the Quick Example format is a URI, not a connection string.
So instead of:
'mssql://testuser:testuser@192.168.1.70\\SQLEXPRESS/MyTestDB'
I should use the below, which I've since confirmed also works!
'mssql://testuser:testuser@192.168.1.70/SQLEXPRESS/MyTestDB'
I have been researching this for days and I haven't been able to find a way to do it.
I am building a React app, running Express on the backend, that needs to access some data in a remote database that lives inside a VPN. At the moment the app lives on my localhost, so it's enough for me to connect my machine using an OpenVPN client and everything works a beauty. The problem will arise when the app goes live and I will need it to have access to the VPN by (I'm guessing) having a VPN client running on the site/domain.
Has anyone done this before?
I have tried to install the node-openvpn package, which seems like it could do the job, but unfortunately I can't manage to make it work, as the connection doesn't seem to be configured properly.
This is the function I call to connect to the VPN. It systematically fails at the line
--> openvpnmanager.authorize(auth);
const openvpnmanager = require('node-openvpn');
...
const connectToVpn = () => {
  var opts = {
    host: 'wopr.remotedbserver.com',
    port: 1337, // port of the openvpn management console
    timeout: 1500, // timeout for connection - optional
    logpath: '/log.txt'
  };
  var auth = {
    user: 'userName',
    pass: 'passWord',
  };
  var openvpn = openvpnmanager.connect(opts);
  openvpn.on('connected', function() {
    console.log('connecting..');
    openvpnmanager.authorize(auth); // <-- Error: Unhandled "error" event. (Cannot connect)
  });
  openvpn.on('console-output', function(output) {
    console.log(output);
  });
  openvpn.on('state-change', function(state) { // emits openvpn state changes as an array
    console.log(state);
  });
};
Am I misusing this function? Is there a better way?
Any help will be extremely appreciated.
Thank You!
The problem will arise when the app goes live and I will need it to have access to the VPN by (I'm guessing) having an OpenVPN client running on the site/domain.
That's correct: you will need an OpenVPN client instance on the server where you run the backend.
The node-openvpn library is simply a way to interact with a local OpenVPN client instance. It cannot create a connection on its own; it depends on the OpenVPN binary, which should already be running.
The solution is simply to run the OpenVPN client on your server (apt-get install openvpn) and let the daemon run. Check out the references below; a short sketch of what the Node side then looks like follows them.
node-openvpn issue that points out that a running instance of the client is needed
OpenVPN CLI tutorial
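With the daemon running and its management interface enabled, the Node side stays close to the question's code; the main addition worth making is an error handler so failures are reported instead of crashing with an unhandled "error" event. A sketch under those assumptions (host, port, and credentials are placeholders):

const openvpnmanager = require('node-openvpn');

// Assumes openvpn was started with a management interface, e.g.
//   openvpn --config client.ovpn --management 127.0.0.1 1337
const opts = { host: '127.0.0.1', port: 1337, timeout: 1500 };
const auth = { user: 'userName', pass: 'passWord' };

const openvpn = openvpnmanager.connect(opts);

// Without this handler, any failure to reach the management console
// surfaces as "Unhandled 'error' event" and kills the process.
openvpn.on('error', (err) => {
  console.error('OpenVPN management error:', err);
});

openvpn.on('connected', () => {
  openvpnmanager.authorize(auth);
});

openvpn.on('console-output', (output) => {
  console.log(output);
});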
When I tried to use adapter: 'redis', it told me to install socket.io-redis version 0.14. I did that and have entered all the info into the session.js file:
module.exports.session = {
  adapter: 'socket.io-redis',
  host: '10...',
  port: 6379,
  db: 'sails',
  pass: '...',
};
And now when I try to run the application I get the following error:
Error: Ready check failed: NOAUTH Authentication required.
I'm not sure why pass: '...' is not working. Anything else I should do?
Note: I am using a Google Compute Engine instance for Redis hosting, and I have a firewall rule allowing access.
I did find a solution for my problem. I am not sure how useful it will be for you, since I believe my situation was a little different. I am using Sails.js on a Bitnami Google Cloud compute instance, and I am also hosting Redis on a separate Bitnami instance, which we have in common. However, I am trying to connect to Redis for use with Kue, so I did not make any use of my config/session file. We still got the same error, though, so the solution was to remove the requirepass on the Redis instance and then, with the firewall rules, only allow my server to access the Redis instance.
I believe the root issue is that Redis has a second password prompt for any attempt to read/write to the data store. So passing the password from the server only logs you in but does not give you access to the data, hence the NOAUTH error. So I believe the requirepass for the instance is mainly for the client side, and server-side instances don't need it. This might be me being naive about how to use Redis, but I do not know how else to enter the password to the prompt from a different server. I feel the firewall rules are fine for me for now to keep unwanted traffic out.
If this is what you want to do/try, then the way I did this for Google Cloud was to SSH into the Redis instance (through your own command line or through the browser shell that Google provides). Then edit the /opt/bitnami/redis/etc/redis.conf file with sudo privileges. Find the line that says requirepass and comment it out by placing a # in front of it. For this to take effect, you then have to restart the server.
Bitnami says you can just do this with sudo /opt/bitnami/ctlscript.sh restart redis. However, I was getting an AUTH error, so in order to get around that I had to force-kill the process with sudo pkill -9 -f redis-server, then restart it with sudo /opt/bitnami/ctlscript.sh restart redis. This should reload the config file, update the instance, and allow your server to connect without requiring a second password prompt.
If you have any questions, please let me know and I will try to help as much as possible.
You have to specify auth_pass:
module.exports.session = {
  adapter: 'socket.io-redis',
  host: '10...',
  port: 6379,
  db: 'sails',
  pass: '...',
  auth_pass: redis_url.auth.split(":")[1] // or simply the password string itself
};
UPDATE
From the documentation:
password: null; If set, client will run redis auth command on connect.
Alias auth_pass (node_redis < 2.5 have to use auth_pass)
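As a quick sanity check outside of Sails, the same credentials can be tested directly with the redis client. A sketch assuming node_redis 2.x, where password (or the older auth_pass alias) is sent via the AUTH command on connect; host, port, and password are placeholders:

const redis = require('redis');

const client = redis.createClient({
  host: '10...',      // same host as in config/session.js
  port: 6379,
  password: '...'     // sent as AUTH on connect (alias: auth_pass)
});

client.on('ready', () => {
  console.log('Authenticated against Redis');
  client.quit();
});

client.on('error', (err) => {
  // A NOAUTH error here means the password was missing or rejected.
  console.error('Redis error:', err.message);
});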
I'm trying to connect my reveal.js client and master presentations together using a socket.io server.
I did all the stuff that Hakim El Hattab describes on his GitHub page, but socket.io still produces an error while trying to connect to the server.
GET http://0.0.7.156:8080/socket.io/1/?t=1393864538446 net::ERR_ADDRESS_UNREACHABLE
If I change 0.0.7.156 to my local machine name, the query succeeds.
I think I have wrong settings in the presentations, but I could not understand how to fix them.
Client:
multiplex: {
  secret: null,
  id: 'a9e10bc1b02efafe',
  url: 'localname:1948'
},
Master:
multiplex: {
  secret: '13938623264068002486',
  id: 'a9e10bc1b02efafe',
  url: 'localname:1948'
},
My experience has been that if I use localhost:
url: 'localhost:1948',
Something inside of reveal.js breaks and transforms localhost internally into the "nonsense" address 0.0.7.156 (bonus points to whoever digs into the code and finds the reason for this; I sadly don't have the time right now). If I instead use 127.0.0.1:
url: '127.0.0.1:1948',
Everything works just fine, at least locally for testing.
You will need to change 127.0.0.1 to whatever hostname you use when deploying the slides later on, though.
Hope this helps!
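To avoid hard-coding the address, one option is to derive the multiplex URL from wherever the presentation is served, so the same config works both locally and once deployed. A sketch, assuming the standard reveal.js 3.x multiplex setup (the id value is the placeholder from the question):

// Derive the host from the page itself; fall back to 127.0.0.1 for local testing.
var multiplexHost = window.location.hostname || '127.0.0.1';

Reveal.initialize({
  // Client config: secret stays null; only the master presentation sets the real secret.
  multiplex: {
    secret: null,
    id: 'a9e10bc1b02efafe',
    url: multiplexHost + ':1948'   // socket.io multiplex server
  },
  dependencies: [
    { src: '//cdn.socket.io/socket.io-1.3.5.js', async: true },
    { src: 'plugin/multiplex/client.js', async: true }
  ]
});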