I am facing an issue: I uploaded a server.key through the Download Secure File utility, but in my pipeline it is treated as D:\a\_temp\server.key when I reference it in my task using echo $(server.secureFilePath). Any idea what the issue could be?
Error:
ERROR running auth:jwt:grant: We encountered a JSON web token error, which is likely not an issue with Salesforce CLI. Here’s the error: ENOENT: no such file or directory, open 'D:\a\1\s\a_tempserver.key'
In your current situation, we recommend using $(Agent.TempDirectory)/server.key instead of $(server.secureFilePath). In Azure DevOps, when you upload a secure file and download it in a pipeline, it is placed in the agent's temp directory, not under the sources directory.
Here are some screenshots from my test; I hope this helps.
The Secure File directory:
The result of $(Agent.TempDirectory):
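In YAML form, that setup would look roughly like the sketch below (assuming a YAML pipeline; the step name serverKey and the script step are just examples):

steps:
- task: DownloadSecureFile@1
  name: serverKey            # example name, used to reference the task's output variable
  inputs:
    secureFile: 'server.key'
- script: |
    echo $(Agent.TempDirectory)/server.key
    echo $(serverKey.secureFilePath)
  displayName: 'Show where the secure file was downloaded'

On a hosted agent, both echo lines should print the same location under the agent's temp directory.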
I'm working on a DoFn that writes to Elastic App Search (elastic_enterprise_search.AppSearch). It works fine when I run my pipeline using the DirectRunner.
But when I deploy to Dataflow, the Elasticsearch client fails because, I suppose, it can't access a certificate store:
File "/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 402, in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data)
FileNotFoundError: [Errno 2] No such file or directory
Any advice on how to overcome this sort of problem? I'm finding it difficult to get any traction on how to solve this on Google.
Obviously urllib3 is set up properly on my local machine for the DirectRunner. I have "elastic-enterprise-search" in the REQUIRED_PACKAGES list of setup.py for my package, along with all my other dependencies:
REQUIRED_PACKAGES = ['PyMySQL', 'sqlalchemy', 'cloud-sql-python-connector',
                     'google-cloud-pubsub', 'elastic-enterprise-search']
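For reference, a minimal setup.py along those lines might look like the sketch below (the package name and version are placeholders; Dataflow picks this up when the pipeline is launched with --setup_file=./setup.py):

import setuptools

REQUIRED_PACKAGES = ['PyMySQL', 'sqlalchemy', 'cloud-sql-python-connector',
                     'google-cloud-pubsub', 'elastic-enterprise-search']

setuptools.setup(
    name='my_pipeline',      # placeholder name
    version='0.0.1',         # placeholder version
    install_requires=REQUIRED_PACKAGES,
    packages=setuptools.find_packages(),
)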
Can I package certificates up with my pipeline? How? Should I look into creating a custom Docker image? Any hints on what it should look like?
Yes, creating a custom container that has the necessary certificates in it would work well here.
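As a rough sketch (not a definitive recipe), a custom SDK container along those lines could look like the following. The base image tag and the my-ca.pem file are assumptions; adjust them to your setup:

# Match this base image tag to your Beam SDK and Python version (assumption: Python 3.8, Beam 2.41.0)
FROM apache/beam_python3.8_sdk:2.41.0

# Install the pipeline's dependencies; certifi provides a CA bundle for urllib3
RUN pip install --no-cache-dir elastic-enterprise-search certifi

# Only needed if your App Search endpoint uses a private CA: ship the certificate with
# the image (my-ca.pem is a placeholder) and point the client at it in your code
COPY my-ca.pem /certs/my-ca.pem

Build and push the image to a registry your project can read, then launch the pipeline with the --sdk_container_image pipeline option pointing at it.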
I'm having some problems with retrieving job output from an AWS glacier vault.
I initiated a job (aws glacier initiate-job); the job is indicated as complete via aws glacier, and then I tried to retrieve the job output:
aws glacier get-job-output --account-id - --vault-name <myvaultname> --job-id <jobid> output.json
However, I receive an error: [Errno 2] No such file or directory: 'output.json'
Thinking that perhaps the output file needed to be created first (which really doesn't make sense), I tried creating it beforehand, but then received an [Errno 9] Bad file descriptor error instead.
I'm currently using the following version of the AWS CLI:
aws-cli/2.4.10 Python/3.8.8 Windows/10 exe/AMD64 prompt/off
I tried using the AWS CLI from both an administrative and a non-administrative command prompt, with the same result. Any ideas on making this work?
Based on a related reported issue, you can try running this command in a Command Prompt window:
copy "c:\Program Files\Amazon\AWSCLI\botocore\vendored\requests\cacert.pem" "c:\Program Files\Amazon\AWSCLI\certifi"
It seems to be a certificate error.
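Separately, for context, the overall retrieval flow with these commands looks roughly like the sketch below (vault name, job ID, and the job-params.json file are placeholders):

rem 1) start the job; job-params.json would contain e.g. {"Type": "inventory-retrieval"}
aws glacier initiate-job --account-id - --vault-name <myvaultname> --job-parameters file://job-params.json
rem 2) poll until the output shows "Completed": true and "StatusCode": "Succeeded"
aws glacier describe-job --account-id - --vault-name <myvaultname> --job-id <jobid>
rem 3) download the result into output.json
aws glacier get-job-output --account-id - --vault-name <myvaultname> --job-id <jobid> output.json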
Occasionally the first upload of artifacts during a GitLab pipeline fails.
I'm getting the following error message in the logs:
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.j.s.b.p.t.FilePersistenceHelper:87) - Failed moving 'path_to_artifactory\filestore_pre\dbRecord123.bin' to 'path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2'. Access to file denied null
2019-08-01 13:43:14,149 [http-nio-8082-exec-187] [ERROR] (o.a.w.s.RepoFilter :251) - Upload request of products-stage-qa:file_to_upload failed due to {}
java.nio.file.AccessDeniedException: Failed to persist file with sha1: 5ecc5f719b4442b9b04f9010646d34917aca8ca2
This seems to happen only during builds, but not during other uploads directly by a user.
It doesn't happen all the time, and only on first tries, but I haven't found any pattern for when the first try fails or succeeds. It doesn't seem to have anything to do with file types or the like. I can't really tell whether it has anything to do with network speeds, since I only have access to part of the infrastructure.
I found an open ticket with the same error message, but only for Conan, whereas for us it only happens with Ivy repositories.
We are using Artifactory 6.9.1 and GitLab 12.0.3 Starter.
This looks to be a permission issue. You are getting an error message that states that the move failed due to "Access to file denied".
You can try to log in to the server as the "artifactory" user and manually move the file "path_to_artifactory\filestore_pre\dbRecord123.bin" to "path_to_artifactory\filestore\5e\5ecc5f719b4442b9b04f9010646d34917aca8ca2" to see whether you run into any issues with this. To log in to the server as the "artifactory" user, you can use the command "sudo -s -u artifactory".
You will also need to make sure that the filestore directory and all of its subdirectories are owned by the "artifactory" user and have the correct permissions, as sketched below.
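For example (the /path_to_artifactory/... paths below are placeholders for your actual filestore location):

# as root: list anything under the filestore that is not owned by the artifactory user
find /path_to_artifactory/filestore /path_to_artifactory/filestore_pre ! -user artifactory -ls
# as root: fix ownership recursively if anything shows up
chown -R artifactory:artifactory /path_to_artifactory/filestore /path_to_artifactory/filestore_pre
# then switch to the artifactory user and retry the move manually
sudo -s -u artifactory
mv /path_to_artifactory/filestore_pre/dbRecord123.bin /path_to_artifactory/filestore/5e/5ecc5f719b4442b9b04f9010646d34917aca8ca2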
Hope this helps.
I have the Google API enabled and an OAuth 2.0 client ID created in the Google console, and I get prompted in a web browser to authorize access, but then I get a FileNotFoundError in JupyterLab.
gc = pygsheets.authorize(outh_file='client_secret.json')
Authentication successful.
Storing credentials to C:\Users\me\Documents\Folder\PythonProject\sheets.googleapis.com-python.json
The token file is created successfully, but the FileNotFoundError shows it is looking in a temp folder, not the folder where it saved the file:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\me\\AppData\\Local\\Temp\\02b35e51-6576-4739-9f1c-135348c707f0\\www.googleapis.com,drive,v3,files,corpora=user&pageSize=500&fields=files%28id%2C+name%29&q=mimeType%3D%27application%2Fvnd.google-apps.spreadsheet%27&supportsTeamDrives=false&includeTeamDriveItems=fal,6fa737f4e6c871f0b9ea9ea38467b8b6'
This is a known bug and has been fixed on trunk. So either install pygsheets from the staging branch:
pip install https://github.com/nithinmurali/pygsheets/archive/staging.zip
or, as a workaround, disable the cache:
gc = pygsheets.authorize(outh_file='client_secret.json', no_cache=True)
I'm trying to run a simple executable using an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there; however, there is no log.txt file.
I have tested the executable: it works when I run it manually, and it produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
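A minimal sketch of that suggestion, reusing the path argument from the question, could look like this (requires using System.Diagnostics; and System.IO;):

public void RunExecutable(string path)
{
    var startInfo = new ProcessStartInfo(path)
    {
        // Run the exe with its own folder as the working directory, so a
        // relative "log.txt" ends up next to the executable.
        WorkingDirectory = Path.GetDirectoryName(path),
        UseShellExecute = false
    };
    Process.Start(startInfo);
}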
If you remote desktop into the instance, can you find the file created in the E:\approot\ folder? As Steve said, setting a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the path to your application root.
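For example (the "approot" subfolder is an assumption based on the comment above; the exact layout can vary by role type):

// RoleRoot resolves to something like "E:" on the role instance
string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");
string appRoot = Path.Combine(roleRoot + @"\", "approot");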