In Chrome extension Manifest V3, remotely hosted code is no longer allowed. The migration documentation offers two solutions.
Configuration-driven features and logic—In this approach, your extension loads a remote configuration (for example a JSON file) at runtime and caches the configuration locally. The extension then uses this cached configuration to decide which features to enable.
Externalize logic with a remote service—Consider migrating application logic from the extension to a remote web service that your extension can call. (Essentially a form of message passing.) This provides you the ability to keep code private and change the code on demand while avoiding the extra overhead of resubmitting to the Chrome Web Store.
Has anyone seen any samples of these two types of configurations?
At first I was trying to work around this error by changing the CSP, but I now don't think that is even possible, so the error is a bit misleading in the first place:
Refused to load the script 'https://code.jquery.com/jquery-3.5.1.slim.min.js' because it violates the following Content Security Policy directive: "script-src 'self'". Note that 'script-src-elem' was not explicitly set, so 'script-src' is used as a fallback.
This is the CSP I was trying to get working, which I now doubt will ever work:
"content_security_policy": {
"script-src": "'self' 'unsafe-eval' https://unpkg.com https://code.jquery.com https://stackpath.bootstrapcdn.com https://cdn.jsdelivr.net/ https://cdn.datatables.net https://cdnjs.cloudflare.com https://cdn.rawgit.com; object-src 'self'"
},
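For reference, Manifest V3 changes the shape of this key, and for extension pages it rejects remote origins in script-src outright, so no allowlist of CDNs will pass validation. The closest valid MV3 policy just points at 'self':
"content_security_policy": {
  "extension_pages": "script-src 'self'; object-src 'self'"
},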
There is a really good Stack Overflow thread on this, but it is for Manifest V2.
Thanks
I was able to get this working, and the extension no longer requires any external source once it's built. It took a while and there were no answers, so I'm posting my solution here.
I created the shell script below to download the external packages to a folder called thirdparty.
Relevant parts of package.json:
"scripts": {
  "build_myapp": "npm install && npm run build_prod",
  "build_prod": "./getexternalscripts.sh && npm run build",
  "tar2zip": "tarball=$(npm list --depth 0 | sed 's/@/-/g; s/ .*/.tgz/g; 1q;'); tar -tf $tarball | sed 's/^package\\///' | zip -@r package; rm $tarball",
  "package": "npm run build_prod && npm pack && npm run tar2zip && mv package.zip myapp-$npm_package_version.zip"
}
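With those scripts in place, a release zip can be produced with one command (the file name comes from the version field in package.json):
npm run package   # produces myapp-<version>.zip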
getexternalscripts.sh
#!/bin/bash
wget -N https://code.jquery.com/jquery-3.5.1.slim.min.js -P ./thirdparty/
wget -N https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js -P ./thirdparty/
wget -N https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js -P ./thirdparty/
wget -N https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js -P ./thirdparty/
When referencing these in my HTML files, it looks like this:
<head>
<script src="/thirdparty/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj"></script>
<script src="/thirdparty/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo"></script>
<script src="/thirdparty/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6QL9UvYjZE3Ipu6Tp75j7Bh/kR0JKI"></script>
<script src="/thirdparty/axios.min.js"></script>
</head>
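One caveat with pinning integrity hashes on locally served files: if getexternalscripts.sh ever pulls a newer version of a library, each hash must be regenerated to match, or the browser will refuse to load the file. One way to compute a hash (a sketch using openssl; prefix the output with sha384- in the integrity attribute):
openssl dgst -sha384 -binary thirdparty/jquery-3.5.1.slim.min.js | openssl base64 -A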
Related
I am trying to generate an HTML report for Postman scripts using Newman.
However, I don't see a report generated in the desired location.
I am using Azure DevOps.
[command]/usr/local/bin/newman run /home/vsts/work/1/s/Postman/postman_collection.json --reporter-html-template /home/vsts/work/1/s/Postman/newman-reporter.html --reporter-html-export /home/vsts/work/1/s/Postman -r cli,html -n 1 -e /home/vsts/work/1/s/Postman/postman_environment.json
I have already installed Newman and the HTML reporter at the same directory level.
You will need to specify a filename with an .html extension in --reporter-html-export, rather than just the directory path.
As a fallback, it might have created the default file in a directory called newman at the same location that the command was run.
If you require a different HTML reporter, it can be specified in the same way.
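For example, the pipeline command above could be adjusted to point the export at a file (report.html is a placeholder name):
newman run /home/vsts/work/1/s/Postman/postman_collection.json -e /home/vsts/work/1/s/Postman/postman_environment.json -r cli,html --reporter-html-template /home/vsts/work/1/s/Postman/newman-reporter.html --reporter-html-export /home/vsts/work/1/s/Postman/report.html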
I am running a Node.js app on Google App Engine, using the following command to deploy my code:
gcloud app deploy --stop-previous-version
My desired behavior is for all instances running previous versions to be terminated, but they always seem to stick around. Is there something I'm missing?
I realize they are not receiving traffic, but I am still paying for them and they cause some background telemetry noise. Is there a better way of running this command?
Example output of gcloud app instances list shows that I have two different versions running.
We accidentally blew through our free Google App Engine credit in less than 30 days because of an errant flexible instance that wasn't cleared by subsequent deployments. When we pinpointed it as the cause it had scaled up to four simultaneous instances that were basically idling away.
tl;dr: Use the --version flag when deploying to specify a version name. An existing instance with the same version will be replaced the next time you deploy.
That led me down the rabbit hole that is --stop-previous-version. Here's what I've found out so far:
--stop-previous-version doesn't seem to be supported anymore. It's mentioned under Flags on the gcloud app deploy reference page, but if you look at the top of the page where all the flags are listed, it's nowhere to be found.
I tried deploying with that flag set to see what would happen but it seemingly had no effect. A new version was still created, and I still had to go in and manually delete the old instance.
There's an open GitHub issue on the gcloud-maven-plugin repo that specifically calls this out as an issue with that plugin, but it has seemingly been ignored.
At this point our best bet is to add --version=staging (or similar) to gcloud app deploy. The reference docs for that flag seem to indicate that it'll replace an existing instance that shares that "version":
--version=VERSION, -v VERSION
The version of the app that will be created or replaced by this deployment. If you do not specify a version, one will be generated for you.
(emphasis mine)
Additionally, Google's own reference documentation on app.yaml (the link's for the Python docs but it's still relevant) specifically calls out the --version flag as the "preferred" way to specify a version when deploying:
The recommended approach is to remove the version element from your app.yaml file and instead, use a command-line flag to specify your version ID
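So a deploy that reuses a fixed version name would look something like this (staging is an arbitrary name):
gcloud app deploy --version=staging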
As far as I can tell, for Standard Environment with automatic scaling at least, it is normal for old versions to remain "serving", though they should hopefully have zero instances (even if your scaling configuration specifies a nonzero minimum). At least that's what I've seen. I think (I hope) that those old "serving" instances won't result in any charges, since billing is per instance.
I know most of the above answers are for Flexible Environment, but I thought I'd include this here for people who are wondering.
(And it would be great if someone from Google could confirm.)
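For anyone wanting to check this themselves, the standard CLI can list what is serving and what is actually running:
gcloud app versions list    # shows each version and its serving status
gcloud app instances list   # shows the instances (and their versions) that are running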
I had the same problem as the OP. Using the flex environment (some of this also applies to the standard environment) with Docker (runtime: custom in app.yaml), I finally solved this! I tried a lot of things and I'm not sure which one fixed it (or whether it was a combination), so I'll list the things I did here, with the most likely solutions listed first.
SOLUTION 1) Ensure that cloud storage deletes old versions
What does cloud storage have to do with anything? (I hear you ask)
Well there's a little tooltip (Google Cloud Platform Web UI (GCP) > App Engine > Versions > Size) that when you hover over it says:
(Google App Engine) Flexible environment code is stored and billed from Google Cloud Storage ... yada yada yada
So based on this info and this answer I visited GCP > Cloud Storage > Browser and found my storage bucket AND a load of other storage buckets I didn't know existed. It turns out that some of the buckets store cached cloud functions code, some store cached docker images and some store other cached code/stuff (you can tell which is which by browsing the buckets).
So I added a deletion policy to all the buckets (except the cloud functions bucket) as follows:
Go to GCP > Cloud Storage > Browser and click the link (for the relevant bucket) in the Lifecycle Rules column > Click ADD A RULE > THEN:
For SELECT ACTION choose "Delete Object" and click continue
For SELECT OBJECT choose "Number of newer versions" and enter 1 in the input
Click CREATE
This will return you to the table view and you should now see the rule in the lifecycle rules column.
REPEAT this process for all relevant buckets (the relevant buckets were described earlier).
THEN delete the contents of the relevant buckets. WARNING: Some buckets warn you NOT to delete the bucket itself, only the contents!
Now re-deploy and your latest version should now get deployed and hopefully you will never have this problem again!
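If you prefer the command line to the web UI, the same deletion rule can be applied with gsutil (a sketch; the bucket name is a placeholder):
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"numNewerVersions": 1}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://your-bucket-name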
SOLUTION 2) Use deploy flags
I added these flags
gcloud app deploy --quiet --promote --stop-previous-version
This probably doesn't help, since these flags seem to be the defaults, but it's worth adding them just in case.
Note that, for the standard environment only (or so I heard on the grapevine), you can also use the --no-cache flag, which might help; with flex, this flag caused the deployment to fail when I tried it.
SOLUTION 3)
This probably does not help at all, but I added:
COPY app.yaml .
to the Dockerfile
TIP 1)
This is probably more of a helpful / useful debug approach than a fix.
Visit GCP > App Engine > Versions
This shows all versions of your app (1 per deployment) and it also shows which version each instance is running (instances are configured in app.yaml).
Make sure all instances are running the latest version. This should happen by default. Probably worth deleting old versions.
You can determine your version from the gcloud app deploy logs (at the start of the logs) but it seems that the versions are listed by order of deployment anyway (most recent at top).
TIP 2)
Visit GCP > App Engine > Instances
SSH into an instance. This is just a matter of clicking a few buttons. Once you have SSH'd in, run:
docker exec -it gaeapp /bin/bash
This will get you into the Docker container running your code. Now you can browse around to make sure it has your latest code.
Well I think my answer is long enough now. If this helps, don't thank me, J-ES-US is the one you should thank ;) I belong to Him ^^
Google may have updated their documentation cited in @IAmKale's answer:
Note that if the version is running on an instance of an auto-scaled service, using --stop-previous-version will not work and the previous version will continue to run because auto-scaled service instances are always running.
Seems like that flag only works with manually scaled services.
This is a supplementary and optional answer in addition to my other main answer.
In addition to my other answer, I am now auto-incrementing the version on each deploy using a script.
My script contents are below.
Basically, the script auto-increments the version every time you deploy. I am using Node.js, so the script uses npm version to bump the version, but this line could easily be tweaked for whatever language you use.
The script requires a clean git working directory for deployment.
The script assumes that when the version is bumped, this will result in file changes (e.g. changes to package.json version) that need pushing.
The script essentially tries to find your SSH key and if it finds it then it starts an SSH agent and uses your SSH key to git commit and git push the file changes. Else it just does a git commit without a push.
It then does a deploy using the --version flag ... --version="${deployVer}"
Thought this might help someone, especially since the top answer talks a lot about using the --version flag on a deploy.
#!/usr/bin/env bash
projectName="vehicle-damage-inspector-app-engine"
# Find SSH key
sshFile1=~/.ssh/id_ed25519
sshFile2=~/Desktop/.ssh/id_ed25519
sshFile3=~/.ssh/id_rsa
sshFile4=~/Desktop/.ssh/id_rsa
if [ -f "${sshFile1}" ]; then
sshFile="${sshFile1}"
elif [ -f "${sshFile2}" ]; then
sshFile="${sshFile2}"
elif [ -f "${sshFile3}" ]; then
sshFile="${sshFile3}"
elif [ -f "${sshFile4}" ]; then
sshFile="${sshFile4}"
fi
# If SSH key found then fire up SSH agent
if [ -n "${sshFile}" ]; then
pub=$(cat "${sshFile}.pub")
for i in ${pub}; do email="${i}"; done
name="Auto Deploy ${projectName}"
git config --global user.email "${email}"
git config --global user.name "${name}"
echo "Git SSH key = ${sshFile}"
echo "Git email = ${email}"
echo "Git name = ${name}"
eval "$(ssh-agent -s)"
ssh-add "${sshFile}" &>/dev/null
sshKeyAdded=true
fi
# Bump version and git commit (and git push if SSH key added) and deploy
if [ -z "$(git status --porcelain)" ]; then
echo "Working directory clean"
echo "Bumping patch version"
ver=$(npm version patch --no-git-tag-version)
git add -A
git commit -m "${projectName} version ${ver}"
if [ -n "${sshKeyAdded}" ]; then
echo ">>>>> Bumped patch version to ${ver} with git commit and git push"
git push
else
echo ">>>>> Bumped patch version to ${ver} with git commit only, please git push manually"
fi
deployVer="${ver//"."/"-"}"
gcloud app deploy --quiet --promote --stop-previous-version --version="${deployVer}"
else
echo "Working directory unclean, please commit changes"
fi
For Node.js users: if you call the script deploy.sh, you should add
"deploy": "sh deploy.sh"
to your package.json scripts and deploy with npm run deploy.
We have been using RequireJS Optimizer in our deployment file for months now. Everything was working.
Basically what we are doing is the following:
We compile our Coffee scripts files
node %DEPLOYMENT_SOURCE%\build\node_modules\coffee-script\bin\coffee -co "%DEPLOYMENT_SOURCE%\src\Web\Scripts" "%DEPLOYMENT_SOURCE%\src\Web\Coffee"
Then we optimize the JS files into a single file using RequireJS:
node "%DEPLOYMENT_SOURCE%\build\node_modules\requirejs\bin\r.js" -o "%DEPLOYMENT_SOURCE%\build\build.js"
Locally, it works fine. But on Azure we get this error:
Cannot optimize network URL, skipping: //1.1.1.1.1/path/site/repository/src/Web/Scripts/dashboard/almond.js
Please note that I changed the IP and the paths.
I checked the documentation of RequireJS, and in fact it does not support network URLs when optimizing.
The same script was working a few days ago.
Is anybody aware of changes in Azure that could have led to this? Any workaround?
Thanks a lot
I got it working by overriding the baseUrl when calling r.js:
node "%DEPLOYMENT_SOURCE%\build\node_modules\requirejs\bin\r.js" -o "%DEPLOYMENT_SOURCE%\build\build.js" baseUrl=%DEPLOYMENT_SOURCE%\src\Web\scripts
I have a Node.js project that compiles Less files to CSS when I start the app. I do this by modifying the start script in package.json like so:
{
  // omitted for brevity
  "scripts": {
    "start": "lessc public/stylesheets/styles.less > public/stylesheets/styles.css; node app.js"
  }
}
This works nicely locally, but not at all on my Windows Azure instance. Either because less needs to be installed globally on the machine for this to work, or because Azure doesn't run npm start. Or both. Either way, I need another solution!
I thought custom deployments was the answer (I'm using git remote deployment) and I tried modifying the deploy.cmd to include
call "lessc public/stylesheets/styles.less > public/stylesheets/styles.css;"
No joy. I even tried
call "%SITE_ROOT%/node_modules/less/bin/lessc %SITE_ROOT%/public/stylesheets/styles.less > %SITE_ROOT%/public/stylesheets/styles.css;
Am I coming at this the wrong way? How can I keep the compiled css files out of my source control and compile them on the server after deployment to Azure?
Thanks!
OK, I finally have this going, I think.
For some reason, even though the physical file is on the disk (I can see them with my FTP client), Azure is not letting me run lessc in the \node_modules\less\bin folder, but it does let me run the version in the \node_modules\.bin folder.
In the end, I added the following lines to my deploy.cmd file, and it worked!
IF NOT DEFINED LESS_COMPILER (
  SET LESS_COMPILER=%DEPLOYMENT_TARGET%\node_modules\.bin\lessc
)
call %LESS_COMPILER% %DEPLOYMENT_TARGET%\public\stylesheets\styles.less > %DEPLOYMENT_TARGET%\public\stylesheets\styles.css
I am trying to deploy my app to Heroku however I rely on using some private git repos as modules. I do this for code reuse between projects, e.g. I have a custom logger I use in multiple apps.
"logger":"git+ssh://git#bitbucket.org..............#master"
The problem is that Heroku obviously does not have SSH access to this code. I can't find anything on this problem. Ideally, Heroku would have a public key I can just add to the modules.
Basic auth
GitHub has support for basic auth:
"dependencies" : {
"my-module" : "git+https://my_username:my_password#github.com/my_github_account/my_repo.git"
}
As does BitBucket:
"dependencies" : {
"my-module": "git+https://my_username:my_password#bitbucket.org/my_bitbucket_account/my_repo.git"
}
But having plain passwords in your package.json is probably not desired.
Personal access tokens (GitHub)
To make this answer more up-to-date, I would now suggest using a personal access token on GitHub instead of a username/password combo.
You should now use:
"dependencies" : {
"my-module" : "git+https://<username>:<token>#github.com/my_github_account/my_repo.git"
}
For GitHub you can generate a new token here:
https://github.com/settings/tokens
App passwords (Bitbucket)
App passwords are primarily intended as a way to provide compatibility with apps that don't support two-factor authentication, and you can use them for this purpose as well. First, create an app password, then specify your dependency like this:
"dependencies" : {
"my-module": "git+https://<username>:<app-password>#bitbucket.org/my_bitbucket_account/my_repo.git"
}
[Deprecated] API key for teams (Bitbucket)
For BitBucket you can generate an API Key on the Manage Team page and then use this URL:
"dependencies" : {
"my-module" : "git+https://<teamname>:<api-key>#bitbucket.org/team_name/repo_name.git"
}
Update 2016-03-26
The method described no longer works if you are using npm3, since npm3 fetches all modules described in package.json before running the preinstall script. This has been confirmed as a bug.
The official node.js Heroku buildpack now includes heroku-prebuild and heroku-postbuild, which will be run before and after npm install respectively. You should use these scripts instead of preinstall and postinstall in all cases, to support both npm2 and npm3.
In other words, your package.json should resemble:
"scripts": {
"heroku-prebuild": "bash preinstall.sh",
"heroku-postbuild": "bash postinstall.sh"
}
I've come up with an alternative to Michael's answer, retaining the (IMO) favourable requirement of keeping your credentials out of source control, whilst not requiring a custom buildpack. This was borne out of frustration that the buildpack linked by Michael is rather out of date.
The solution is to setup and tear down the SSH environment in npm's preinstall and postinstall scripts, instead of in the buildpack.
Follow these instructions:
Create two scripts in your repo, let's call them preinstall.sh and postinstall.sh.
Make them executable (chmod +x *.sh).
Add the following to preinstall.sh:
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
if [ "$GIT_SSH_KEY" != "" ]; then
  echo "Detected SSH key for git. Adding SSH config" >&1
  echo "" >&1
  # Ensure we have an ssh folder
  if [ ! -d ~/.ssh ]; then
    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
  fi
  # Load the private key into a file.
  echo $GIT_SSH_KEY | base64 --decode > ~/.ssh/deploy_key
  # Change the permissions on the file to
  # be read-only for this user.
  chmod 400 ~/.ssh/deploy_key
  # Setup the ssh config file.
  echo -e "Host github.com\n"\
          " IdentityFile ~/.ssh/deploy_key\n"\
          " IdentitiesOnly yes\n"\
          " UserKnownHostsFile=/dev/null\n"\
          " StrictHostKeyChecking no"\
          > ~/.ssh/config
fi
Add the following to postinstall.sh:
#!/bin/bash
if [ "$GIT_SSH_KEY" != "" ]; then
  echo "Cleaning up SSH config" >&1
  echo "" >&1
  # Now that npm has finished running, we shouldn't need the ssh key/config anymore.
  # Remove the files that we created.
  rm -f ~/.ssh/config
  rm -f ~/.ssh/deploy_key
  # Clear that sensitive key data from the environment
  export GIT_SSH_KEY=0
fi
Add the following to your package.json:
"scripts": {
"preinstall": "bash preinstall.sh",
"postinstall": "bash postinstall.sh"
}
Generate a private/public key pair using ssh-keygen.
Add the public key as a deploy key on Github.
Create a base64 encoded version of your private key, and set it as the Heroku config var GIT_SSH_KEY (see the example commands just after this list).
Commit and push your app to Github.
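For the encode-and-set step, the commands might look like this (a sketch; assumes the key pair was saved as ~/.ssh/deploy_key and you are on macOS):
base64 < ~/.ssh/deploy_key | pbcopy    # copy the encoded key (use base64 -w0 on Linux)
heroku config:set GIT_SSH_KEY=<paste_here>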
When Heroku builds your app, before npm installs your dependencies, the preinstall.sh script is run. This creates a private key file from the decoded contents of the GIT_SSH_KEY environment variable, and creates an SSH config file to tell SSH to use this file when connecting to github.com. (If you are connecting to Bitbucket instead, then update the Host entry in preinstall.sh to bitbucket.org). npm then installs the modules using this SSH config. After installation, the private key is removed and the config is wiped.
This allows Heroku to pull down your private modules via SSH, while keeping the private key out of the codebase. If your private key becomes compromised, since it is just one half of a deploy key, you can revoke the public key in GitHub and regenerate the keypair.
As an aside, since GitHub deploy keys have read/write permissions, if you are hosting the module in a GitHub organization, you can instead create a read-only team and assign a 'deploy' user to it. The deploy user can then be configured with the public half of the keypair. This adds an extra layer of security to your module.
It's a REALLY bad idea to have plaintext passwords in your git repo. Using an access token is better, but you will still want to be super careful.
"my_module": "git+https://ACCESS_TOKEN:x-oauth-basic#github.com/me/my_module.git"
I created a custom Node.js buildpack that will allow you to specify an SSH key that is registered with ssh-agent and used by npm when dynos are first set up. It seamlessly allows you to specify your module as an SSH URL in your package.json, as shown:
"private_module": "git+ssh://git#github.com:me/my_module.git"
To setup your app to use your private key:
Generate a key: ssh-keygen -t rsa -C "your_email@example.com" (Enter no passphrase. The buildpack does not support keys with passphrases)
Add the public key to GitHub: pbcopy < ~/.ssh/id_rsa.pub (on OS X) and paste the result into the GitHub admin UI
Add the private key to your heroku app's config: cat id_rsa | base64 | pbcopy, then heroku config:set GIT_SSH_KEY=<paste_here> --app your-app-name
Setup your app to use the buildpack as described in the heroku nodeJS buildpack README included in the project. In summary the simplest way is to set a special config value with heroku config:set to the github url of the repository containing the desired buildpack. I'd recommend forking my version and linking to your own github fork, as I'm not promising to not change my buildpack.
My custom buildpack can be found here: https://github.com/thirdiron/heroku-buildpack-nodejs and it works for my system. Comments and pull requests are more than welcome.
Based on the answer from @fiznool, I created a buildpack to solve this problem using a custom SSH key stored as an environment variable. As the buildpack is technology agnostic, it can be used to download dependencies using any tool, like Composer for PHP, Bundler for Ruby, npm for JavaScript, etc.: https://github.com/simon0191/custom-ssh-key-buildpack
Add the buildpack to your app:
$ heroku buildpacks:add --index 1 https://github.com/simon0191/custom-ssh-key-buildpack
Generate a new SSH key without a passphrase (let's say you named it deploy_key)
Add the public key to your private repository account. For example:
Github: https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/
Bitbucket: https://confluence.atlassian.com/bitbucket/add-an-ssh-key-to-an-account-302811853.html
Encode the private key as a base64 string and add it as the CUSTOM_SSH_KEY environment variable of the heroku app.
Make a comma separated list of the hosts for which the ssh key should be used and add it as the CUSTOM_SSH_KEY_HOSTS environment variable of the heroku app.
# MacOS
$ heroku config:set CUSTOM_SSH_KEY=$(base64 --input ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
# Ubuntu
$ heroku config:set CUSTOM_SSH_KEY=$(base64 ~/.ssh/deploy_key) CUSTOM_SSH_KEY_HOSTS=bitbucket.org,github.com
Deploy your app and enjoy :)
I was able to set up resolving of GitHub private repositories during the Heroku build via personal access tokens.
Generate a GitHub access token here: https://github.com/settings/tokens
Set access token as Heroku config var: heroku config:set GITHUB_TOKEN=<paste_here> --app your-app-name or via Heroku Dashboard
Add heroku-prebuild.sh script:
#!/bin/bash
if [ "$GITHUB_TOKEN" != "" ]; then
  echo "Detected GITHUB_TOKEN. Setting git config to use the security token" >&1
  git config --global url."https://${GITHUB_TOKEN}@github.com/".insteadOf git@github.com:
fi
Add the prebuild script to package.json:
"scripts": {
  "heroku-prebuild": "bash heroku-prebuild.sh"
}
For the local environment, we can also use git config ..., or we can add the access token to the ~/.netrc file:
machine github.com
login PASTE_GITHUB_USERNAME_HERE
password PASTE_GITHUB_TOKEN_HERE
and installing private github repos should work.
npm install OWNER/REPO --save will appear in package.json as: "REPO": "github:OWNER/REPO"
and resolving private repos in Heroku build should also work.
Optionally, you can set up a postbuild script to unset the GITHUB_TOKEN.
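A minimal sketch of such a postbuild script, mirroring the prebuild above (the file name heroku-postbuild.sh is a suggestion):
#!/bin/bash
# Remove the token-backed git config once dependencies are installed
if [ "$GITHUB_TOKEN" != "" ]; then
  git config --global --unset url."https://${GITHUB_TOKEN}@github.com/".insteadOf
fi
Register it in package.json as "heroku-postbuild": "bash heroku-postbuild.sh".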
This answer (https://stackoverflow.com/a/29677091/6135922) is good, but I changed the preinstall script a little bit. Hope this will help someone.
#!/bin/bash
# Generates an SSH config file for connections if a config var exists.
echo "Preinstall"
if [ "$GIT_SSH_KEY" != "" ]; then
echo "Detected SSH key for git. Adding SSH config" >&1
echo "" >&1
# Ensure we have an ssh folder
if [ ! -d ~/.ssh ]; then
mkdir -p ~/.ssh
chmod 700 ~/.ssh
fi
# Load the private key into a file.
echo $GIT_SSH_KEY | base64 --decode > ~/.ssh/deploy_key
# Change the permissions on the file to
# be read-only for this user.
chmod o-w ~/
chmod 700 ~/.ssh
chmod 600 ~/.ssh/deploy_key
# Setup the ssh config file.
echo -e "Host bitbucket.org\n"\
" IdentityFile ~/.ssh/deploy_key\n"\
" HostName bitbucket.org\n" \
" IdentitiesOnly yes\n"\
" UserKnownHostsFile=/dev/null\n"\
" StrictHostKeyChecking no"\
> ~/.ssh/config
echo "eval `ssh-agent -s`"
eval `ssh-agent -s`
echo "ssh-add -l"
ssh-add -l
echo "ssh-add ~/.ssh/deploy_key"
ssh-add ~/.ssh/deploy_key
# uncomment to check that everything works just fine
# ssh -v git#bitbucket.org
fi
You can use a private repository with authentication in package.json, as in the example below:
https://usernamegit:passwordgit@github.com/reponame/web/tarball/branchname
In short, it is not possible. The best solution to this problem I came up with is to use the new git subtrees. At the time of writing they are not in the official git source and so need to be installed manually, but they will be included in v1.7.11. At the moment they are available via Homebrew and apt-get. It is then a case of doing:
git subtree add -P node_modules/someprivatemodule git@github.......someprivatemodule {master|tag|commit}
This bulks out the repo size, but updating is easy: just run the command above again with git subtree pull.
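A later update might look something like this (the URL and branch are placeholders matching the add command above):
git subtree pull -P node_modules/someprivatemodule git@github.......someprivatemodule master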
I have done this before with modules from GitHub. npm currently accepts the name of a package or a link to a tar.gz file containing the package.
For example, if you want to use express.js directly from GitHub (grab the link via the download section), you could do:
"dependencies" : {
"express" : "https://github.com/visionmedia/express/tarball/2.5.9"
}
So you need to find a way to access your repository as a tar.gz file via HTTP(S).