I'm trying to clone a repository with gcloud.
Here is my configuration:
$ gcloud info
Google Cloud SDK [183.0.0]
Platform: [Linux, x86_64] ('Linux', 'debian', '4.9.0-4-amd64', '#1 SMP Debian 4.9.65-3 (2017-12-03)', 'x86_64', '')
Python Version: [2.7.13 (default, Nov 24 2017, 17:33:09) [GCC 6.3.0 20170516]]
Python Location: [/usr/bin/python2]
Site Packages: [Disabled]
Installation Root: [/usr/lib/google-cloud-sdk]
Installed Components:
core: [2017.12.08]
app-engine-python: [1.9.64]
beta: [2017.12.08]
gsutil: [4.28]
bq: [2.0.27]
alpha: [2017.12.08]
System PATH: [/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games]
Python PATH: [/usr/bin/../lib/google-cloud-sdk/lib/third_party:/usr/lib/google-cloud-sdk/lib:/usr/lib/python2.7:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-old:/usr/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/bin/kubectl]
Installation Properties: [/usr/lib/google-cloud-sdk/properties]
User Config Directory: [/home/me/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/home/me/.config/gcloud/configurations/config_default]
Account: [account@client.com]
Project: [ipcloud-viewer]
Current Properties:
[core]
project: [ipcloud-viewer]
account: [account@client.com]
disable_usage_reporting: [True]
[compute]
region: [europe-west1]
zone: [europe-west1-d]
Logs Directory: [/home/me/.config/gcloud/logs]
Last Log File: [/home/me/.config/gcloud/logs/2017.12.21/11.39.49.435511.log]
git: [git version 2.11.0]
ssh: [OpenSSH_7.4p1 Debian-10+deb9u2, OpenSSL 1.0.2l 25 May 2017]
But when I want to clone, I get this:
$ gcloud source repos clone repo --project=client_project --account=account@client.com
Cloning into '/home/me/project/temp/repo'...
fatal: remote error: Access denied to me@other.fr
ERROR: (gcloud.source.repos.clone) Command '['git', 'clone', u'https://source.developers.google.com/p/client_project/r/repo', '/home/me/project/temp/repo', '--config', 'credential.helper=!gcloud auth git-helper --account=account@client.com --ignore-unknown $@']' returned non-zero exit status 128
As you can see, I'm logged in with account@client.com, yet during the process the account me@other.fr is used... and I do not know why!
Any idea what the problem is?
BTW, I deleted my ~/.config/gcloud and re-ran gcloud init before doing all this...
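For what it's worth, the account gcloud itself considers active can be checked like this (a generic check, not output captured from this machine):

# List the credentialed accounts and show which one is active
gcloud auth list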
EDIT: solution found.
I had a ~/.netrc file with information about my me@other.fr account... I removed it and it worked!
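For anyone hitting the same thing, a stale entry like that is easy to spot, assuming it is keyed by the host git actually talks to here (source.developers.google.com):

# Look for a leftover credential entry in ~/.netrc
grep -A 2 'source.developers.google.com' ~/.netrc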
The error suggests that the original repository (the one you're trying to clone) is owned by me@other.fr, but you're trying to access it (via git, under the hood) using the account@client.com account - the gcloud command executes under a single account.
You could try adding the account@client.com account as a project member of the original project, allowing it to access that private repository for cloning; see Adding project members and setting permissions.
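A minimal sketch of doing that from the CLI (the role is an assumption; any role that grants read access to the project's source repositories would do, and client_project stands in for the real project ID):

# Give account@client.com read access to the project's source repositories
gcloud projects add-iam-policy-binding client_project \
    --member=user:account@client.com \
    --role=roles/source.reader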
Related
I have a Python 2.x app and some Python 3.x apps running on Google App Engine.
Recently, I updated the 2.x app without any issue. Now, when I try to deploy an update for a Python 3.x app, I get the error "Error Response: [7] Failed to create cloud build: Permission denied on"
Services to deploy:
descriptor: [C:\Users\artha\Documents\gae billApp\CbicNtfnAndAutoMailer\app.yaml]
source: [C:\Users\artha\Documents\gae billApp\CbicNtfnAndAutoMailer]
target project: [cbicntfnandautomailer]
target service: [default]
target version: [1]
target url: [https://cbicntfnandautomailer.appspot.com]
target service account: [App Engine default service account]
Do you want to continue (Y/n)? Y
Beginning deployment of service [default]...
#============================================================#
#= Uploading 0 files to Google Cloud Storage =#
#============================================================#
File upload done.
Updating service [default]...failed.
ERROR: (gcloud.app.deploy) Error Response: [7] Failed to create cloud build: Permission denied on 'locations/asia-south1' (or it may not exist)..
Previously, I did not face any issue.
gcloud app describe shows me
authDomain: gmail.com
codeBucket: staging.cbicntfnandautomailer.appspot.com
databaseType: CLOUD_DATASTORE_COMPATIBILITY
defaultBucket: cbicntfnandautomailer.appspot.com
defaultHostname: cbicntfnandautomailer.appspot.com
featureSettings:
splitHealthChecks: true
useContainerOptimizedOs: true
gcrDomain: asia.gcr.io
id: cbicntfnandautomailer
locationId: asia-south1
name: apps/cbicntfnandautomailer
serviceAccount: cbicntfnandautomailer@appspot.gserviceaccount.com
servingStatus: SERVING
I have also tried disabling and re-enabling cloud build, but to no avail...
Can you please advise how to resolve the issue? Thanks!
EDIT: As a workaround, I created a separate project and deployed there, but the root cause remains unknown!
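(That workaround boiled down to pointing the deploy at the new project, roughly like this; new-project-id is a placeholder.)

# Deploy the same service to the newly created project
gcloud app deploy app.yaml --project=new-project-id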
Check whether you have reached the limit of build triggers allowed per region:
Cloud Build limits
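A couple of generic CLI checks that can help here (nothing specific to this error; the project ID is taken from the question, and the triggers command may require the beta component):

# How many build triggers already exist in the project
gcloud beta builds triggers list --project=cbicntfnandautomailer

# Whether any builds are being created at all
gcloud builds list --project=cbicntfnandautomailer --limit=10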
I'm trying to deploy a Rails 6 app on platform.sh.
I get this error when deploying my Rails 6 project with Webpacker 5.2.1.
I have done days of research on Google, without success.
I have NVM, Node, and npm installed locally, and everything works fine there: Webpacker compiles correctly locally, but not on the remote machine,
and this error makes the deployment fail.
W: Webpacker requires Node.js ">=10.17.0" and you are using v6.17.1
W: Please upgrade Node.js https://nodejs.org/en/download/
W: Exiting!
E: Error building project: Step failed with status code 1.
E: Error: Unable to build application, aborting.
It happens when I compile the assets with this line in the build hook (a sketch of a build hook that provisions a newer Node appears after the .platform.app.yaml below):
RAILS_ENV=production bundle exec rails webpacker:install
Can you help me?
I'll share my config:
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'

mounts:
    log:
        source: local
        source_path: log
    tmp:
        source: local
        source_path: tmp

relationships:
    postgresdatabase: 'dbpostgres:postgresql'

# The size of the persistent disk of the application (in MB).
disk: 1024

hooks:
    build: |
        bundle install --without development test
        RAILS_ENV=production bundle exec rails webpacker:install
    deploy: |
        RAILS_ENV=production bundle exec rake db:migrate

web:
    upstream:
        socket_family: "unix"
    commands:
        start: "unicorn -l $SOCKET -E production config.ru"
    locations:
        '/':
            root: "public"
            passthru: true
            expires: 1h
            allow: true
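The build hook above is where the old Node comes from, so one approach is to provision a newer Node there before compiling assets. A minimal sketch of such a build hook, assuming nvm is fetched during the build and that $PLATFORM_CACHE_DIR is available at build time (the nvm release and Node version are placeholders, not part of the original config):

# Build hook sketch: install a newer Node via nvm, then build as before
export NVM_DIR="$PLATFORM_CACHE_DIR/.nvm"
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm install 14   # any Node >= 10.17.0 satisfies Webpacker's check
bundle install --without development test
RAILS_ENV=production bundle exec rails webpacker:install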
services.yaml
dbpostgres:
    # The type of your service (postgresql), which uses the format
    # 'type:version'. Be sure to consult the PostgreSQL documentation
    # (https://docs.platform.sh/configuration/services/postgresql.html#supported-versions)
    # when choosing a version. If you specify a version number which is not available,
    # the CLI will return an error.
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 9216
    configuration:
        extensions:
            - plpgsql
            - pgcrypto
            - uuid-ossp
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"

"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
As the title suggests, I'm using an Azure dynamic inventory file and am having an issue running a playbook against the collected inventory.
I'm using Ansible 2.9.1 and used the instructions found here to setup the inventory file.
$ ansible --version
ansible 2.9.1
config file = None
configured module search path = ['/home/myuser/.ansible/plugins/modules',
'/usr/share/ansible/plugins/modules']
ansible python module location = /home/myuser/.local/lib/python3.6/site-packages/ansible
executable location = /home/myuser/.local/bin/ansible
python version = 3.6.9 (default, Sep 11 2019, 16:40:19) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
My inventory file:
plugin: azure_rm
include_vm_resource_groups:
  - mytestrg
auth_source: cli
cloud_environment: AzureUSGovernment
hostvar_expressions:
  ansible_connection: "'winrm'"
  ansible_user: "'azureuser'"
  ansible_password: "'Password1'"
  ansible_winrm_server_cert_validation: "'ignore'"
keyed_groups:
  - prefix: some_tag
    key: tags.sometag | default('none')
exclude_host_filters:
  - powerstate != 'running'
Simple ad-hoc commands, like ping, succeed when using the inventory file. What I'm not able to get working, though, is running a playbook against it.
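(For reference, the kind of ad-hoc check that works here looks roughly like this; win_ping is an assumption based on the WinRM connection settings above.)

# Ad-hoc connectivity check against the dynamic inventory
ansible all -i ./inventory_azure_rm.yaml -m win_ping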
My playbook:
- hosts: all
  name: Run whoami
  tasks:
    - win_command: whoami
      register: whoami_out
    - debug:
        var: whoami_out
Command I'm using to run the playbook:
ansible-playbook -i ./inventory_azure_rm.yaml whoami.yaml
Regardless of the hosts I target the playbook against, it fails with:
[WARNING]: Could not match supplied host pattern, ignoring:
playbooks/whoami.yaml
[WARNING]: No hosts matched, nothing to do
Any advice on how I can get past this? I appreciate any assistance!
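(One generic way to narrow this down, not from the original post: confirm that the playbook path passed to ansible-playbook actually exists, and that the inventory parses on its own, e.g. with ansible-inventory.)

# Show the hosts and groups the dynamic inventory actually produces
ansible-inventory -i ./inventory_azure_rm.yaml --graph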
I followed this excellent tutorial on Getting Started with Node.js but get the following error when running the last command gcloud preview app deploy .:
/Users/me/Google Drive/appengine-nodejs-quickstart> gcloud preview app deploy .
Updating module [default] from file [/Users/me/Google Drive/appengine-nodejs-quickstart/app.yaml]
08:51 PM Host: appengine.google.com
Error 400: --- begin server output ---
Failed Project Preparation (app_id='s~foo-bar-123'). Failed to enable APIs.
--- end server output ---
ERROR: (gcloud.preview.app.deploy) Command failed with error code [1]
I was able to run the app locally just fine using gcloud preview app run .. I checked and I do have Billing enabled for the project and some default APIs are enabled. Here's the results from docker version if it helps:
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa
OS/Arch (client): darwin/amd64
Any ideas what could be the issue?
I'm an engineer on the Managed VMs team and have looked into the issue. I believe the problem is that we have changed our terms of service and you need to accept them before you can continue to use the product. Obviously, our messaging in this case is bad and needs to be fixed.
For now you need to go to cloud.google.com/console, select your project, and accept the new terms of service.
I have installed GitLab 6.4.3 and everything went well. But there is just one weird problem!
Users that are developers can create merge requests and ALSO ACCEPT THEM! I read in the GitLab help that a developer can only create new merge requests.
Also, the project is private and the users are added by the admin as developers.
And here is my GitLab information:
System information
System: Ubuntu 12.04
Current User: git
Using RVM: no
Ruby Version: 2.0.0p353
Gem Version: 2.0.14
Bundler Version: 1.5.2
Rake Version: 10.1.0
GitLab information
Version: 6.4.3
Revision: 38397db
Directory: /home/git/gitlab
DB Adapter: mysql2
URL: http://git.technical.com
HTTP Clone URL: http://git.technical.com/some-project.git
SSH Clone URL: git@git.technical.com:some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 1.8.0
Repositories: /home/git/repositories/
Hooks: /home/git/gitlab-shell/hooks/
Git: /usr/bin/git
Do you have any idea what is going on here? And is there any way to fix this?
There is no permission for accepting merge requests... yet: issue 4763 is about that.
The app/views/projects/merge_requests/show/_mr_accept.html.haml file refers to:
- unless @allowed_to_merge
  .bs-callout
    %strong You don't have permission to merge this MR
And allowed_to_merge? is defined by this method in app/controllers/projects/merge_requests_controller.rb:
def allowed_to_merge?
  action = if project.protected_branch?(@merge_request.target_branch)
             :push_code_to_protected_branches
           else
             :push_code
           end
  can?(current_user, action, @project)
end
So the only control for now is based on whether or not you can push to the target branch of the merge request.
Nothing more.
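Given that, one workaround is to protect the target branch, so that only users who can push to protected branches (masters) pass the check above. A hedged sketch using the GitLab API of that era (the v3 endpoint, project ID, host, and token are assumptions; check the API documentation for your exact version):

# Protect the master branch of project 42 so only masters can push (and thus merge) to it
curl --request PUT \
     --header "PRIVATE-TOKEN: your_private_token" \
     "http://git.technical.com/api/v3/projects/42/repository/branches/master/protect"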