I have a requirement to automate copying a file from one shared drive location to another. I have been instructed to use Groovy for this.
I'm completely new to Groovy. I managed to copy the file using targetlocation << sourcelocation.text, but the shared drive requires a username and password, and I'm not sure how to supply them.
Any help would be appreciated.
If it is a Windows or Samba share, you could use jcifs to connect:
import jcifs.CIFSContext
import jcifs.Configuration
import jcifs.config.PropertyConfiguration
import jcifs.context.BaseContext
import jcifs.smb.NtlmPasswordAuthenticator
import jcifs.smb.SmbFile

// domain, userName, password and the smb:// URLs below are placeholders you must fill in
Configuration config = new PropertyConfiguration(new Properties())
CIFSContext context = new BaseContext(config)
context = context.withCredentials(new NtlmPasswordAuthenticator(domain, userName, password))

// e.g. "smb://server/share/folder/file.txt"
SmbFile source = new SmbFile(sourceUrl, context)
SmbFile target = new SmbFile(targetUrl, context)
source.copyTo(target)
You can then copy the file you want: SmbFile's copyTo method does the copy for you, and SmbFile also exposes input/output streams if you need to read or transform the contents instead.
I have 100+ private Git repositories in Bitbucket and want to give a new private user read access to them. It is terrible to set this access on each separate repo. Is it possible to select several repos and grant access to them in one operation? Maybe by looping with curl in bash over the Bitbucket REST API?
Thanks in advance for the answer!
This code uses stashy, a Python client for Bitbucket Server.
It may need small modifications depending on your server's project structure.
#!/usr/bin/python
import stashy

# User to be granted access
user = ""
bitbucket_url = "https://SERVER_URL"
bitbucket_username = "<bitbucket_username>"
bitbucket_passwd = "<pass>"

"""
Promote or demote a user's permission level.
Depending on context, you may use one of the following sets of permissions:
project permissions:
    * PROJECT_READ
    * PROJECT_WRITE
    * PROJECT_ADMIN
"""
permission = "PROJECT_READ"

stash = stashy.connect(bitbucket_url, bitbucket_username, bitbucket_passwd)

for project in stash.projects.list():
    print("granting " + user + " " + permission + " access to " + project["name"])
    print(stash.projects[project["key"]].permissions.users.grant(user, permission))
https://gist.github.com/ibidani/9ae06c690fb32ee09aa6bb5480c18325
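If you'd rather not add a dependency on stashy, the same loop can be done with plain requests against Bitbucket Server's REST API, along the lines the question suggests. This is a sketch, not a tested script: the routes shown (/rest/api/1.0/projects and .../permissions/users) are the standard Bitbucket Server endpoints, but the server URL, credentials, and user name are placeholders you must fill in.

```python
import requests

def grant_url(base_url, project_key):
    """Build the Bitbucket Server endpoint that grants a project-level
    permission to a user (user and permission go in query parameters)."""
    return "{}/rest/api/1.0/projects/{}/permissions/users".format(base_url, project_key)

def grant_read_to_all_projects(base_url, auth, user):
    # Bitbucket pages its list responses, so walk the pages.
    start, done = 0, False
    while not done:
        resp = requests.get("{}/rest/api/1.0/projects".format(base_url),
                            params={"start": start}, auth=auth)
        resp.raise_for_status()
        page = resp.json()
        for project in page["values"]:
            # A PUT with name + permission query params grants the access.
            r = requests.put(grant_url(base_url, project["key"]),
                             params={"name": user, "permission": "PROJECT_READ"},
                             auth=auth)
            r.raise_for_status()
        done = page["isLastPage"]
        start = page.get("nextPageStart", start)

# Example (placeholder values):
# grant_read_to_all_projects("https://SERVER_URL", ("admin", "adminpass"), "newuser")
```

The same loop translates directly to bash with curl if you prefer; the Python version just makes the paging easier to handle.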
I'm making a Python app that launches an Azure Batch job.
I want, via user inputs, to create a pool.
For simplicity, I'll just add all the application packages present in the Batch account to the pool.
However, I'm not able to get the list of available application packages.
This is the portion of code:
import azure.batch.batch_service_client as batch
from azure.common.credentials import ServicePrincipalCredentials

credentials = ServicePrincipalCredentials(
    client_id='xxxxx',
    secret='xxxxx',
    tenant='xxxx',
    resource="https://batch.core.windows.net/"
)

batch_client = batch.BatchServiceClient(
    credentials,
    base_url=self.AppData['CloudSettings'][0]['BatchAccountURL'])

# Get list of applications
batchApps = batch_client.application.list()
I can create a pool, so the credentials are good and there are applications in the account, but the returned list is empty.
Can anybody help me with this?
Thank you,
Guido
Update:
I tried:
import azure.batch.batch_service_client as batch
batchApps = batch.ApplicationOperations.list(batch_client)
and
import azure.batch.operations as batch_operations
batchApps = batch_operations.ApplicationOperations.list(batch_client)
but they don't seem to work; batchApps is always empty.
I don't think it's an authentication issue, since I'd get an error otherwise.
At this point I wonder if it's just a bug in the Python SDK?
The SDK version I'm using is:
azure.batch: 4.1.3
azure: 4.0.0
This is a screenshot of the empty batchApps var:
Are these the links you are looking for?
Understanding the application package concept: https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Since it's the Python SDK in action here, the ApplicationOperations class is documented at: https://learn.microsoft.com/en-us/python/api/azure-batch/azure.batch.operations.applicationoperations?view=azure-python (it covers both the list and get operations).
Hope this helps.
I haven't tried lately using the Azure Python SDK, but the way I solved this was to use the Azure REST API:
https://learn.microsoft.com/en-us/rest/api/batchservice/application/list
For the authorization, I had to create an application, give it access to the Batch service, and then programmatically generate the token with the following request:
import requests

data = {'grant_type': 'client_credentials',
        'client_id': clientId,
        'client_secret': clientSecret,
        'resource': 'https://batch.core.windows.net/'}
postReply = requests.post('https://login.microsoftonline.com/' + tenantId + '/oauth2/token', data)
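To complete the picture, here is a hedged sketch of how the returned token can then be used against the application list endpoint linked above. The Batch account URL and the api-version value are assumptions; check the REST reference for the version your account supports.

```python
import requests

def auth_header(token_reply_json):
    """Build the Authorization header from the OAuth token reply body."""
    return {'Authorization': 'Bearer ' + token_reply_json['access_token']}

def list_applications(batch_account_url, token_reply_json,
                      api_version='2019-08-01.10.0'):
    # GET {account}/applications?api-version=... returns the application
    # packages uploaded to the Batch account.
    resp = requests.get(batch_account_url + '/applications',
                        params={'api-version': api_version},
                        headers=auth_header(token_reply_json))
    resp.raise_for_status()
    return resp.json().get('value', [])

# Example (placeholder account URL):
# apps = list_applications('https://myaccount.westeurope.batch.azure.com',
#                          postReply.json())
```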
Problem:
I'm trying to get an image out of an Azure file share for manipulation. I need to read the file as a System.Drawing.Image, but I cannot create a valid FileInfo object or Image using the UNC path (which I need to do in order to use it over IIS).
Current Setup:
A virtual directory called Photos in the IIS website points to the UNC path of the Azure file share (e.g. \\myshare.file.core.windows.net\sharename\pathtoimages).
This works as http://example.com/photos/img.jpg, so I know it is not a permissions or authentication issue.
For some reason, though, I cannot get a reference to the file.
var imgpath = Path.Combine(Server.MapPath("~/Photos"), "img.jpg");
// resolves as \\myshare.file.core.windows.net\sharename\pathtoimages\img.jpg
var fi = new FileInfo(imgpath);
if (fi.Exists) // this returns false 100% of the time
{
    var img = System.Drawing.Image.FromFile(fi.FullName);
}
The problem is that the file is never found to exist, even though I can take that path, paste it into an Explorer window, and get img.jpg back 100% of the time.
Does anyone have any idea why this would not be working?
Do I need to be using CloudFileShare object to just get a read of a file I know is there?
It turns out the issue is that I needed to wrap my code in an impersonation of the Azure file share user ID, since the virtual directory is not really in play at all at this point.
using (new impersonation("UserName","azure","azure pass"))
{
//my IO.File code
}
I used this guy's impersonation script, found here.
I have tried downloading small files from Google Colaboratory. They download easily, but whenever I try to download larger files, it shows an error. What is the way to download large files?
If you have created a large zip file, say my_archive.zip, then you can download it as follows:
Mount your Google Drive from your Colab notebook. You will be asked to enter an authentication code.
from google.colab import drive
drive.mount('/content/gdrive',force_remount=True)
Copy the zip file to any of your Google Drive folders (e.g. a downloads folder). You may also copy the file to 'My Drive', which is the root folder.
!cp my_archive.zip '/content/gdrive/My Drive/downloads/'
!ls -lt '/content/gdrive/My Drive/downloads/'
Finally, you can download the zip file from your Google drive to your local machine.
This is how I handle this issue:
from google.colab import auth
from googleapiclient.http import MediaFileUpload
from googleapiclient.discovery import build
auth.authenticate_user()
Then click on the link, authorize Google Drive and paste the code in the notebook.
drive_service = build('drive', 'v3')
def save_file_to_drive(name, path):
    file_metadata = {
        'name': name,
        'mimeType': 'application/octet-stream'
    }
    media = MediaFileUpload(path,
                            mimetype='application/octet-stream',
                            resumable=True)
    created = drive_service.files().create(body=file_metadata,
                                           media_body=media,
                                           fields='id').execute()
    print('File ID: {}'.format(created.get('id')))
    return created
Then:
save_file_to_drive(destination_name, path_to_file)
This will save whatever files to your Google Drive, where you can download or sync them or whatever.
I tried many different solutions. The only way that was effective and quick is to zip the file/folder and then download it directly:
!zip -r model.zip model.pkl
And to download:
from google.colab import files
files.download('model.zip')
Google Colab doesn't allow you to download large files using files.download(). But you can use one of the following methods to access them:
The easiest one is to use GitHub: commit and push your files, then clone the repo to your local machine.
You can mount Google Drive in your Colab instance and write the files there.
I'm trying to use a public Google calendar in a webpage that will need editing functionality.
To that effect, I created the calendar and made it public. I then created a Google service account and the related client ID.
I also enabled the Calendar API and added the v3 DLLs to the project.
I downloaded the p12 certificate, and that's when the problems start.
The call to Google goes out with an X509 certificate, but the .NET Framework is built such that it stores the key in a per-user temp folder.
Since it's a shared host for the web server (GoDaddy), I cannot have the app pool identity modified.
As a result, I'm getting this error:
System.Security.Cryptography.CryptographicException: The system cannot
find the file specified.
when calling:
X509Certificate2 certificate = new X509Certificate2(GoogleOAuth2CertificatePath,
    "notasecret", X509KeyStorageFlags.Exportable);
That certificate var is then to be used in the Google call:
ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(GoogleOAuth2EmailAddress)
    {
        User = GoogleAccount,
        Scopes = new[] { CalendarService.Scope.Calendar }
    }.FromCertificate(certificate));
... but I never get that far.
Question: is there a way to make the call differently, i.e. not use an X509 certificate but JSON instead?
Or can I get the X509 function to use a general temp location rather than a user location, to which I have no access since I can't change the identity in the app pool?
Since I'm completely stuck, any help would be appreciated.
One simple option which avoids needing to worry about file locations is to embed the certificate within your assembly. In Visual Studio, right-click on the file and show its properties. Under Build Action, pick "Embedded resource".
You should then be able to load the data with something like this:
// In a helper class somewhere...
private static byte[] LoadResourceContent(Type type, string resourceName)
{
    string fullName = type.Namespace + "." + resourceName;
    using (var stream = type.Assembly.GetManifestResourceStream(fullName))
    {
        var output = new MemoryStream();
        stream.CopyTo(output);
        return output.ToArray();
    }
}
Then:
byte[] data = ResourceHelper.LoadResourceContent(typeof(MyType), "Certificate.p12");
var certificate = new X509Certificate2(data, "notasecret", X509KeyStorageFlags.Exportable);
Here MyType is some type which is in the same folder as your resource.
Note that there are lots of different "web" project types in .NET... depending on the exact project type you're using, you may need to tweak this.