How to bulk grant access to many Bitbucket repos?

I have 100+ private Git repos in Bitbucket and want to give a new user read access to all of them. Setting this access on each repo separately is painful. Is it possible to select several repos and grant access to them in one operation? Or maybe it can be done with a loop and curl in bash using Bitbucket's REST API?
Thanks in advance!

This code uses stashy, a Python client for Bitbucket Server.
It may need small modifications depending on the project structure of your server.
#!/usr/bin/python
import stashy

# User to be granted access
user = ""
bitbucket_url = "https://SERVER_URL"
bitbucket_username = "<bitbucket_username>"
bitbucket_passwd = "<pass>"

"""
Promote or demote a user's permission level.
Depending on context, you may use one of the following sets of permissions:
project permissions:
* PROJECT_READ
* PROJECT_WRITE
* PROJECT_ADMIN
"""
permission = "PROJECT_READ"

stash = stashy.connect(bitbucket_url, bitbucket_username, bitbucket_passwd)
for project in stash.projects.list():
    print("granting " + user + " " + permission + " access to " + project["name"])
    print(stash.projects[project["key"]].permissions.users.grant(user, permission))
https://gist.github.com/ibidani/9ae06c690fb32ee09aa6bb5480c18325
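If you prefer the loop-and-curl route from the question over a stashy dependency, a rough bash sketch against the Bitbucket Server REST API might look like the following (the server URL, admin credentials, and user name are placeholders, and jq is assumed to be installed):
#!/bin/bash
# Placeholders: your server URL, an admin's credentials, and the user to grant access to.
BITBUCKET="https://SERVER_URL"
AUTH="admin:<pass>"
GRANTEE="new.user"
# List all project keys, then grant read permission on each project.
for key in $(curl -s -u "$AUTH" "$BITBUCKET/rest/api/1.0/projects?limit=1000" | jq -r '.values[].key'); do
  echo "granting $GRANTEE PROJECT_READ on $key"
  curl -s -u "$AUTH" -X PUT \
    "$BITBUCKET/rest/api/1.0/projects/$key/permissions/users?name=$GRANTEE&permission=PROJECT_READ"
done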

Related

How to retrieve the star counts in GitLab Python API?

I'm trying to retrieve the number of stars and commits of a public software repository on GitLab using its Python client, but I keep getting GitlabHttpError 503 when executing the following script.
import gitlab
import requests
url = 'https://gitlab.com/juliensimon/huggingface-demos'
private_token = 'xxxxxxxx'
gl = gitlab.Gitlab(url, private_token=private_token)
all_projects = gl.projects.list(all=True)
I read the previous posts but none of them works for me: [1], [2], and [3]. People mentioned:
Retrying later usually works [I tried at different times but still got the same error.]
Setting an environment variable for no_proxy [Not sure what that means for me? I don't set a proxy explicitly.]
The 503 response is telling you something - your base URL is off. You only need to provide the base GitLab URL so the client makes requests against its api/v4/ endpoint.
Either use https://gitlab.com only, so that the client will correctly call https://gitlab.com/api/v4 endpoints (instead of trying https://gitlab.com/juliensimon/huggingface-demos/api/v4 as it does now), or skip it entirely when using GitLab.com if you're on python-gitlab 3.0.0 or later.
# Explicit gitlab.com
url = 'https://gitlab.com'
gl = gitlab.Gitlab(url, private_token=private_token)
# Or just use the default for GitLab.com (python-gitlab 3.0.0+ required)
gl = gitlab.Gitlab(private_token=private_token)
Edit: The original question was about the 503, but the comment to my answer is a follow-up on how to get project details. Here's the full snippet, which also fetches the token from the environment instead:
import os
import gitlab
private_token = os.getenv("GITLAB_PRIVATE_TOKEN")
gl = gitlab.Gitlab(private_token=private_token)
project = gl.projects.get("juliensimon/huggingface-demos")
print(project.forks_count)
print(project.star_count)

Accessing GraphDB with RDF4J as a specific user

I'm using RDF4J to add RDF triples to a completely open (Security off) GraphDB instance. I use the RemoteRepositoryManager and it works fine:
RepositoryManager repositoryManager = new RemoteRepositoryManager(GraphDBInstanceURL);
Repository repository = repositoryManager.getRepository(graphDBrepoName);
RepositoryConnection repositoryConnection = repository.getConnection();
Now we need to add security to GraphDB, but it is not clear to me how to pass the specific GraphDB user credentials in the above code. Any pointers are welcome, thanks.
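One possible approach, as a sketch: RDF4J's RemoteRepositoryManager has a setUsernameAndPassword method, which should cover GraphDB's basic authentication (the username and password variables below are placeholders for the GraphDB user's credentials):
// Sketch: same code as above, with credentials set before the repository is fetched.
// username and password are placeholders for the GraphDB user's credentials.
RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager(GraphDBInstanceURL);
repositoryManager.setUsernameAndPassword(username, password);
repositoryManager.init();
Repository repository = repositoryManager.getRepository(graphDBrepoName);
RepositoryConnection repositoryConnection = repository.getConnection();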

Pass Credentials to access Network Drive using Groovy

I have a requirement to automate a file copy from one shared drive location to another. I have been instructed to use Groovy for this.
I'm completely new to Groovy. I managed to copy the file using targetlocation << sourcelocation.text. But it requires a username and password to access the shared drive. I'm not sure how to do that.
Any help would be appreciated.
If it is a Windows or Samba share, you could use jcifs to connect:
import jcifs.smb.SmbFile
import jcifs.smb.NtlmPasswordAuthentication
import jcifs.context.BaseContext
import jcifs.CIFSContext
import jcifs.config.PropertyConfiguration
import jcifs.Configuration
Configuration config = new PropertyConfiguration(new Properties())
CIFSContext context = new BaseContext(config)
context = context.withCredentials(new NtlmPasswordAuthentication(null, domain, userName, password))
SmbFile share = new SmbFile(url, context)
You can then copy the file that you want.
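For example, the copy itself can go through SmbFile.copyTo; a sketch with placeholder smb:// URLs, assuming both shares accept the same authenticated context:
// Placeholders: smb:// URLs of the source file and its destination.
SmbFile source = new SmbFile("smb://server/share/report.txt", context)
SmbFile target = new SmbFile("smb://otherserver/backup/report.txt", context)
source.copyTo(target)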

How to use google-api-client for Google Cloud Logging

I want to access Google Cloud Logging from a Python script.
I managed to access these logs from https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list --> Try this API
Now I want to do the same from a Python script. I saw that in the previous step an authorization token is created automatically.
I am trying with this code sample, but I don't know how to POST to https://logging.googleapis.com/v2/entries:list using discovery:
from google.oauth2 import service_account
import googleapiclient.discovery
credentials = service_account.Credentials.from_service_account_file(service_account_file)
logging = googleapiclient.discovery.build('logging', 'v2', credentials=credentials)
Then I have tried with this code sample:
import requests
payload = {
    "projectIds": ["my-proyect"],
    "resourceNames": [],
    "filter": "resource.type=cloudiot_device",
    "orderBy": "timestamp desc",
    "pageSize": 1
}
headers = {"Authorization": "Bearer AAAAAAA"}
r = requests.post("https://logging.googleapis.com/v2/entries:list", params=payload, headers=headers)
That code sample works correctly, but in place of the AAAAAAA token I copy and paste the token shown at https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list. What I don't know is how to generate this token from a Python script.
Thanks!
This is less easy to find because many of Google's Cloud (!) services now prefer Cloud Client libraries.
However...
import google.auth
from googleapiclient import discovery
credentials, project = google.auth.default()
service = discovery.build("logging", "v2", credentials=credentials)
Auth: https://pypi.org/project/google-auth/
Now, this uses Google Application Default credentials and I recommend you create a service account, generate a key and grant the account the permission needed. You will then need to export GOOGLE_APPLICATION_CREDENTIALS before running your code.
PROJECT=[[YOUR-PROJECT]]
BILLING=[[YOUR-BILLING]]
ACCOUNT=[[YOUR-ACCOUNT]]
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
EMAIL="${ACCOUNT}#${PROJECT}.iam.gserviceaccount.com"
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL} \
--project=${PROJECT}
# See: https://cloud.google.com/iam/docs/understanding-roles#logging-roles
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/logging.viewer
export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json
python3 your-code.py
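As for the part the question was stuck on, calling entries:list through the discovery client: the JSON request body goes in the body argument. A minimal sketch, with placeholder project ID and filter:
# Placeholders: project ID and filter. entries:list is a POST; its JSON body goes in `body`.
body = {
    "resourceNames": ["projects/my-project"],
    "filter": "resource.type=cloudiot_device",
    "orderBy": "timestamp desc",
    "pageSize": 5,
}
response = service.entries().list(body=body).execute()
for entry in response.get("entries", []):
    print(entry["timestamp"])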
Ok, thanks to the Google Engineer, the first part of the solution is to disable the SDK's use of gRPC and force HTTP so that page_size is respected:
client = logging.Client(_use_grpc=0)
Alternatively, you can set GOOGLE_CLOUD_DISABLE_GRPC="{{anything}}" in the environment.
And the second part of the solution is to only iterate over the first page of page_size results:
iterator = logger.list_entries(
    order_by=DESCENDING,
    page_size=page_size,
)
print(type(iterator))
for entry in next(iterator.pages):
timestamp = entry.timestamp.isoformat()
print("{}".format(timestamp))
NOTE forcing HTTP entails logger.list_entries returning an HTTPIterator instead of a (gRPC) generator, hence the ability to use next() and the pages property.
NOTE The 'trick' is to only enumerate the first page of n results. There may be multiple pages but we ignore subsequent ones.
I am using the following code sample to extract log information from Google Cloud Logging.
import os
from google.cloud import logging
from google.cloud.logging import DESCENDING
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my-service-account-file"
def list_entries(logger_name):
    """Lists the most recent entries for a given logger."""
    logging_client = logging.Client()
    logger = logging_client.logger(logger_name)
    print("Listing entries for logger {}:".format(logger.name))
    filter_str = "resource.type=cloudiot_device AND resource.labels.device_num_id=00000000000 AND jsonPayload.eventType=PUBLISH"
    for entry in logger.list_entries(filter_=filter_str, order_by=DESCENDING, page_size=10):
        timestamp = entry.timestamp.isoformat()
        print("  {}: {}".format(timestamp, entry.payload))

list_entries("cloudiot.googleapis.com%2Fdevice_activity")
My goal is to run this Python script every 5 minutes and get the last 5 entries from the Logging. My problem is that this code sample starts extracting entries but never stops. How can I limit the number of entries?
Thanks!

How to delete GKE (Google Kubernetes Engine) cluster using python?

I'm new to GKE with Python. I would like to delete my GKE (Google Kubernetes Engine) cluster using a Python script.
I found an API delete_cluster() from the google-cloud-container python library to delete the GKE cluster.
https://googleapis.dev/python/container/latest/index.html
But I'm not sure how to use that API by passing the required parameters in Python. Can anyone explain this with an example?
Or is there any other way to delete a GKE cluster in Python?
Thanks in advance.
First you'd need to configure the Python Client for Google Kubernetes Engine as explained on this section of the link you shared. Basically, set up a virtual environment and install the library with pip install google-cloud-container.
If you are running the script within an environment such as the Cloud Shell with a user that has enough access to manage the GKE resources (with at least the Kubernetes Engine Cluster Admin permission assigned), the client library will handle the necessary authentication from the script automatically and the following script will most likely work:
from google.cloud import container_v1
project_id = "YOUR-PROJECT-NAME" #Change me.
zone = "ZONE-OF-THE-CLUSTER" #Change me.
cluster_id = "NAME-OF-THE-CLUSTER" #Change me.
name = "projects/"+project_id+"/locations/"+zone+"/clusters/"+cluster_id
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
Notice that as per the delete_cluster method documentation you only need to pass the name parameter. If for some reason you are only provided the credentials (generally in the form of a JSON file) of a service account that has enough permissions to delete the cluster, you'd need to modify the client for the script and use the credentials parameter to get the client correctly authenticated in a similar fashion to:
...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
Where the credentials variable holds the credentials object built from the provided service account JSON key file (including its path if it's not located in the folder where the script is running).
Finally, notice that the response returned by the delete_cluster method is an Operation, which can be used to monitor the long-running deletion in a similar fashion to what is explained here, with the self_link attribute corresponding to the long-running operation.
After running the script you could use a curl command in a similar fashion to:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://container.googleapis.com/v1/projects/[PROJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
by checking the status field (which could be in RUNNING state while the deletion is happening) of the response to that curl command. Or you could use the requests library or any equivalent to automate this check of the long-running operation within your script.
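For instance, a sketch of that automated check using the same client library instead of curl (it reuses project_id, zone, and the delete_cluster response from the snippet above; the 10-second poll interval is arbitrary):
import time
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
# Assumes project_id, zone and response (from delete_cluster) as defined above.
op_name = "projects/" + project_id + "/locations/" + zone + "/operations/" + response.name
while True:
    operation = client.get_operation(name=op_name)
    if operation.status == container_v1.Operation.Status.DONE:
        print("Cluster deletion finished")
        break
    time.sleep(10)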
This page contains an example for the command you are trying to perform.
To give some more details that are required for the command to succeed -
Your environment needs to contain the relevant authentication environment variables; this page contains instructions for how to set that up.
Once your environment is successfully authenticated we can run the delete cluster command like so -
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name="projects/<project>/locations/<location>/clusters/<cluster>")
