Deploy a NodeJS app to Heroku

I've followed the getting started guide to deploy a Node.js application to Heroku.
I reached this stage in the tutorial.
When I try to run the command:
heroku create
It gives me this error:
UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate
What can be the problem?

Try this:
Edit the Git config text file (with my favorite line-ending-neutral editor, Notepad++) located at:
C:\Program Files (x86)\Git\etc\gitconfig
In the [http] block, add an option to disable sslVerify. It looked like this when I was done:
[http]
sslVerify = false
sslCAinfo = /bin/curl-ca-bundle.crt
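Alternatively, assuming you only need the change for your own user rather than the system-wide config file, the same option can be set from the command line (a sketch, not part of the original answer):
git config --global http.sslVerify false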
The answer is related to this

The problem was that I have a web content filter on my internet connection, and it was blocking the command
heroku create
from going through.

Run the following line:
npm config set strict-ssl false
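Note that this disables certificate verification for npm entirely; once the certificate issue is sorted out you would presumably want to turn it back on:
npm config set strict-ssl true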

For Windows 10:
Go to the system variables (Windows logo key > type "environment variables" > click the Environment Variables button).
Check/set the variable SSL_CERT_DIR=YourCertFolder, where YourCertFolder is the folder containing your certificates.
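A command-line sketch of the same change, assuming your certificates live in a hypothetical folder C:\certs:
setx SSL_CERT_DIR "C:\certs"
Open a new terminal afterwards so the variable is picked up.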

Google Cloud Run Second Flask Application - requirements.txt issue

I have a Google Cloud Run Flask application named "HelloWorld1" already up and running; however, I need to create a second Flask application. I followed the steps below as per the documentation:
1- In "Cloud Shell Editor" I clicked "<>Cloud Code" --> "New Application" --> "Cloud Run Application: Basic Cloud Run Application .." --> "Python (Flask): Cloud Run", provided a new folder, and the application was created.
2- When I try to run it using "Run on Cloud Run Emulator" I get the following error:
Starting to run the app using configuration 'Cloud Run: Run/Debug Locally' from .vscode/launch.json...
To view more detailed logs, go to Output channel : "Cloud Run: Run/Debug Locally - Detailed"
Dependency check started
Dependency check succeeded
Starting minikube, this may take a while...................................
minikube successfully started
The minikube profile 'cloud-run-dev-internal' has been scheduled to stop automatically after exiting Cloud Code. To disable this on future deployments, set autoStop to false in your launch configuration /home/mian/newapp/.vscode/launch.json
Update initiated
Update failed with error code DEVINIT_REGISTER_BUILD_DEPS
listing files: file pattern [requirements.txt] must match at least one file
Skaffold exited with code 1.
Cleaning up...
Finished clean up.
I tried the following:
1- Tried to create a different type of application, e.g. Django instead of Flask, but I always get the same error.
2- Tried to give the full path of [requirements.txt] in the Docker settings, no luck.
Can someone please help me understand why I am not able to run a second Cloud Run Flask app because of this error?
It's likely that your Dockerfile references the 'requirements.txt' file, but that file is not in your local directory. So, it gives the error that it's missing:
listing files: file pattern [requirements.txt] must match at least one file
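A minimal sketch of a fix, assuming the template's Dockerfile contains a line like COPY requirements.txt ./ and the file simply isn't present in the new application's folder: creating it next to the Dockerfile should satisfy the file pattern check, e.g.
cd /home/mian/newapp
echo "Flask" > requirements.txt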

NodeJS Google Vision is unable to detect a Project Id in the current environment

Under an Ubuntu environment, NodeJS Google Vision complains:
Error: Unable to detect a Project Id in the current environment.
Even though I already set the JSON credential through:
$ export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"
Please help.
As a quick hack you can try this:
$ GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json" node app.js
It's not recommended to use a .json credentials file locally. I've seen these leak onto production servers and cause whole platforms to be deleted, plus they introduce environment-switching and security issues.
Set up the Google Cloud CLI instead.
Now the server will 'look' at the local environment and use that.
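A sketch of what that setup usually involves (exact commands depend on your installation, but application-default credentials are the usual mechanism):
gcloud auth application-default login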
If you get the error "Unable to detect a Project Id in the current environment.", it means the auth library cannot find the project default id.
You need to have a base project in Google Cloud set, regardless of environmental variables and project you're running.
Run
gcloud config set project [some-project-id]
Now if you run (node example)
"dev": "NODE_ENV=dev GCP_PROJECT=some-project-id nodemon index.ts",
It will load the project environment. This also allows you to deploy easier with:
"deploy:dev": "y | gcloud app deploy --project some-dev-project app.yaml",
"deploy:prod": "y | gcloud app deploy --project some-prod-project app.yaml"
App Engine has security set up automatically with standard environments. With the flexible environment you can use one of the managed images Google provides.
If you are usually a Windows user trying out Ubuntu (like me), the problem is likely the assumption that the export command makes the variable available to all terminal sessions, and that you need to open a new terminal for it to take effect (as you would expect for an environment variable in a Windows terminal).
The export command doesn't export the variable to other terminal sessions, so if you export it in a terminal, you have to use it in that same terminal.
If you would like to export it permanently, then you can try the solution listed here
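A minimal sketch of the usual way to make it permanent, assuming bash is your shell:
echo 'export GOOGLE_APPLICATION_CREDENTIALS="/var/credential_google.json"' >> ~/.bashrc
source ~/.bashrc
New terminal sessions will then pick up the variable automatically.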
You can put the path to the JSON credentials directly when instantiating the client, by passing it as an argument.
For example:
const client = new speech.SpeechClient({ keyFilename: "credential_google.json" });
Also, for me setting it in the terminal didn't work.
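Since this question is about the Vision client, the equivalent for it (assuming the @google-cloud/vision package) would presumably look like:
const vision = require('@google-cloud/vision');
const client = new vision.ImageAnnotatorClient({ keyFilename: "credential_google.json" });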

(remote rejected) master -> master (pre-receive hook declined), Push rejected, failed to compile Node.js app

I know that there are a couple of posts like this one, but none of their solutions works for me.
Here is what I receive when I go for git push heroku master:
Please help. Ignoring node_modules is not working.
It is a DocPad app - the package.json file is updated according to DocPad's manual.
I also have a Procfile set up as in the link above.
PS. I have tried to deploy the DocPad app via OpenShift, but while following the manual at http://docpad.org/docs/deploy/ I receive an error at step 5.
The application 'appname' is configured for git reference deployments but the
artifact provided ('https://github.com/myusername/appname#master') is a url.
Please provide a git reference to deploy (branch, tag or commit SHA1) or
configure your app to deploy from binaries with 'rhc configure-app appname
--deployment-type binary'.
If I configure myapp to deployment-type binary, it doesn't work either.
The plugin which Heroku tries to install returned a 404, and the installation fails because of that.
Verify that the plugin is indeed public and not something you have written or used only locally.
There is no package with this name hosted on the registry you use.
Remove the line with "docpad-plugin-blah": "2" from your package.json file. That line was provided in the docs simply to show you how to install plugins, but there's no such plugin as blah.
"dependencies": {
"docpad": "6"
},
I strongly recommend that you read through the Getting Started on OpenShift to get an overview of the development workflow using Git.
That being said, and if you really meant to use git reference deployments and you know why you are using them, then read through the Managing Deployments section on the developers page of OpenShift and find out how to properly set up git reference deployments. For instance, 'https://github.com/myusername/appname#master' is not a valid git url and therefore it cannot be cloned.

Azure Websites Git Deployment dropping "/" in SCM_BUILD_ARGS

Description
We are currently working on a project based on MVC4/Umbraco, using Azure Websites to host it.
We are using SCM_BUILD_ARGS to change between different build setups depending on which site in Azure we deploy to (Test and Prod).
This is done by defining an app setting in the UI:
SCM_BUILD_ARGS = /p:Environment=Test
Earlier we used Bitbucket Integration to deploy and here this setting worked like a champ.
We have now switched to using Git Deployment, pushing the changes from our build server when tests have passed.
But when we do this, we get a lovely error.
"MSB1008: Only one project can be specified."
Trying to redeploy the same failed deployment from the UI on Azure works though.
After some trial and error I ended up going into deploy.cmd and outputting the %SCM_BUILD_ARGS% value from the script.
It looks like the / gets dropped from SCM_BUILD_ARGS but only when using Git deploy, not Bitbucket Integration or redeploy from UI.
Workaround
As a workaround I have for now added a / in the deploy.cmd script in front of %SCM_BUILD_ARGS%, but this of course breaks redeploy, since we then end up with //p:Environment=Test in the MSBuild command once the value of %SCM_BUILD_ARGS% has been inserted.
:: 2. Build to the temporary path
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
:: Added / to SCM_BUILD_ARGS
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
) ELSE (
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\www\www.csproj" [....] /%SCM_BUILD_ARGS%
)
Question
Anyone know of a better solution for this problem or is it possibly a bug in Kudu?
We would love to have both deploy from Git and Redeploy working.
Could you try changing the "/" to "-"? For instance, change the app setting from /p:Environment=Test to -p:Environment=Test and see if it helps.
-p:Environment=Test did not work for me; the setting which worked for me at the time of this writing (September 2015) was
-p:Configuration=Test
There is clearly a Kudu bug in there, and you should open an issue on https://github.com/projectkudu/kudu. But for now, I can give you a workaround.
Instead of using an App Setting, include a .deployment file at the root of your repo, containing:
[config]
SCM_BUILD_ARGS = /p:Environment=Test
I think this will work in all cases. I suspect the bug has to do with bash messing up the environment in post receive hook scenarios, which only apply to direct git push but not to Bitbucket and Redeploy scenarios.
UPDATE: In fact, it's easy to see such weird bash behavior. Try this:
Open cmd.exe
Run: set foo=/abc to set a variable
Run bash
From bash, run cmd to launch a new cmd on top of bash (so cmd -> bash -> cmd)
Run set foo to get the value of foo
Result:
FOO=C:/Program Files (x86)/git/abc
So the value gets completely messed up. The key also gets upper-cased, though that's mostly harmless. Strange stuff...

perforce web client throws error "P4CHARSET must be set in order to connect to a unicode server"

I use Perforce as my source control; my desktop client is working fine.
I just installed P4Web, the Perforce web client, and when I try to open a file I get the following error:
P4CHARSET must be set in order to connect to a unicode server
I already added a P4CHARSET registry key set to utf8 under
HKEY_CURRENT_USER\Software\perforce\environment
What more can I do?
Thanks!
You probably need to set it for the P4Web service. Check out this KB article:
http://kb.perforce.com/article/231/p4web-as-a-windows-service
There are examples of setting other parameters like the port number; you can follow those guidelines for setting P4CHARSET.
You need to run the following line in cmd:
p4 set -S "Perforce Web" P4CHARSET=utf8
