Latest update: HTTP requests within the task are working, but HTTPS requests are not.
I am trying to use a Celery task to upload files to Google Drive once they have been uploaded to the local web server for backup.
I have seen multiple questions asking similar things: the Google API cannot be made to work in a Celery task, even though it works when run without delay(). Those questions didn't receive any answers.
Question 1, where #chucky is struggling like me.
Implementation and Information:
Server: Django Development Server (localhost)
Celery: Working with RabbitMQ
Database: Postgres
GoogleDriveAPI: V3
I was able to get credentials and a token for accessing Drive files and display the first ten files, if the quickstart file is run separately.
Google Drive API Quickstart.py
Running this Quickstart.py shows the file and folder list of the drive.
So I added the same code, with all required imports, to the task named create_upload_folder() in tasks.py to test whether the task would work and show the list of files.
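In outline, the task looks roughly like the following (a minimal sketch, not my exact code; the token.json path comes from the quickstart's OAuth flow and is an assumption here):

from celery import shared_task
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

@shared_task
def create_upload_folder():
    # Load the token produced by the quickstart's OAuth flow (assumed path).
    creds = Credentials.from_authorized_user_file("token.json")
    # Build the Drive v3 service and list the first ten files, as in the quickstart.
    service = build("drive", "v3", credentials=creds)
    results = service.files().list(pageSize=10, fields="files(id, name)").execute()
    return [item["name"] for item in results.get("files", [])]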
I am running it with an Ajax call, but I keep getting this error.
Tracing back shows where the error above occurs. The root of the error is:
[2021-07-13 21:10:03,979: WARNING/MainProcess]
[2021-07-13 21:10:04,052: ERROR/MainProcess] Task create_upload_folder[2463ad5b-4c7c-4eba-b862-9417c01e8314] raised unexpected: ServerNotFoundError('Unable to find the server at www.googleapis.com')
Traceback (most recent call last):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1346, in _conn_request
conn.connect()
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1136, in connect
sock.connect((self.host, self.port))
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 257, in connect
if socket_connect(fd, address):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 40, in socket_connect
err = descriptor.connect_ex(address)
It's failing on name resolution (it can't find the IP of www.googleapis.com), most likely because it can't contact a DNS server that has the IP (or can't contact any DNS server at all).
Make sure your DNS server is properly set up, or, if you are behind a corporate proxy/VPN, that you are actually using it.
You can verify that name resolution works by fetching the IPs manually:
$ nslookup www.googleapis.com
Non-authoritative answer:
Name: www.googleapis.com
Address: 172.217.23.234
Name: www.googleapis.com
Address: 216.58.201.74
Name: www.googleapis.com
Address: 172.217.23.202
Name: www.googleapis.com
Address: 2a00:1450:4014:80c::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:800::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:80d::200a
In case you can fetch the IPs manually, there's a connectivity problem with Python itself not being aware of the proxies that might be set up on your PC. In that case, try setting:
http_proxy=http://your.proxy:port
https_proxy=http://your.proxy:port
in the environment, as a command prefix, or directly in the configuration of the HTTP client that httplib2 uses.
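For instance, a minimal sketch of setting these from Python before the Google client is built (your.proxy:port is a placeholder):

import os
import httplib2

# Placeholders: substitute your actual proxy host and port.
os.environ["http_proxy"] = "http://your.proxy:port"
os.environ["https_proxy"] = "http://your.proxy:port"

# httplib2 can then derive its proxy settings from the environment.
proxy_info = httplib2.proxy_info_from_environment()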
The major problem is using httplib2 with Python 3, or some other complication: even though the Google API client for Python says Python 3 is fully supported, you can still have problems with requests. At least, the problem is there for me with Python 3 on Windows.
After a lot of research I found that falling back to Python 2 is one solution, but another is using httplib2shim: after creating the credentials for your service, and before calling .build() for your service, you need to call:
import httplib2shim
from googleapiclient.discovery import build

httplib2shim.patch()
service = build(API_SERVICE_NAME, API_VERSION, credentials=creds)
This solves the issue of httplib2 being unable to find www.googleapis.com.
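Applied to the task from the question, the patch belongs at the top of tasks.py, once, before any task body builds a service:

import httplib2shim

# Patch once, at import time, before any Http object is created.
httplib2shim.patch()

With the patch in place, the create_upload_folder() task can build the service exactly as in the quickstart. httplib2shim swaps httplib2's connections for urllib3-based ones, which is presumably why the eventlet socket traceback above goes away.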
Related
I'm working on a DoFn that writes to Elastic App Search (elastic_enterprise_search.AppSearch). It works fine when I run my pipeline using the DirectRunner.
But when I deploy to Dataflow, the Elasticsearch client fails because, I suppose, it can't access a certificate store:
File "/usr/local/lib/python3.8/site-packages/urllib3/util/ssl_.py", line 402, in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir, ca_cert_data)
FileNotFoundError: [Errno 2] No such file or directory
Any advice on how to overcome this sort of problem? I'm finding it difficult to get any traction on how to solve it via Google.
Obviously urllib3 is set up properly on my local machine for the DirectRunner. I have "elastic-enterprise-search" in the REQUIRED_PACKAGES key of setup.py for my package, along with all my other dependencies:
REQUIRED_PACKAGES = [
    'PyMySQL',
    'sqlalchemy',
    'cloud-sql-python-connector',
    'google-cloud-pubsub',
    'elastic-enterprise-search',
]
Can I package certificates up with my pipeline? How? Should I look into creating a custom Docker image? Any hints on what it should look like?
Yes, creating a custom container that has the necessary credentials in it would work well here.
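If a custom image feels heavyweight, one alternative sketch (an untested assumption on my part, not an established fix): add certifi to REQUIRED_PACKAGES and point the SSL machinery at its CA bundle from the DoFn's setup(), which runs on the worker rather than at pipeline-construction time:

import os
import certifi
import apache_beam as beam

class WriteToAppSearch(beam.DoFn):  # hypothetical name for the DoFn in question
    def setup(self):
        # setup() executes on the Dataflow worker, so certifi.where() resolves
        # to a CA bundle that actually exists on the worker's filesystem.
        os.environ["SSL_CERT_FILE"] = certifi.where()
        os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()

    def process(self, element):
        # Create the AppSearch client and index the element here.
        yield element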
Recently I needed to add WebSockets to my backend application, currently hosted in the Google App Engine (GAE) standard environment. Because WebSockets are a feature only available in GAE's flexible environment, I have been attempting a redeployment, but with little success.
To make the change to a flexible environment I have updated the app.yaml file from
runtime: nodejs10
env: standard
to
runtime: nodejs
env: flex
While the deploy previously worked in the standard environment, now, with env: flex, running the command gcloud app deploy --app-yaml=app-staging.yaml --verbosity=debug gives the following stack trace:
Do you want to continue (Y/n)? Y
DEBUG: No bucket specified, retrieving default bucket.
DEBUG: Using bucket [gs://staging.finnsalud.appspot.com].
DEBUG: Service [appengineflex.googleapis.com] is already enabled for project [finnsalud]
Beginning deployment of service [finnsalud-staging]...
INFO: Using ignore file at [~/checkouts/twilio/backend/.gcloudignore].
DEBUG: not expecting type '<class 'NoneType'>'
Traceback (most recent call last):
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 982, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 809, in Run
resources = command_instance.Run(args)
File "/google-cloud-sdk/lib/surface/app/deploy.py", line 115, in Run
return deploy_util.RunDeploy(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 669, in RunDeploy
deployer.Deploy(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 428, in Deploy
source_files = source_files_util.GetSourceFiles(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/source_files_util.py", line 184, in GetSourceFiles
return list(it)
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/util/gcloudignore.py", line 233, in GetIncludedFiles
six.ensure_str(upload_directory), followlinks=True):
File "//google-cloud-sdk/lib/third_party/six/__init__.py", line 884, in ensure_str
raise TypeError("not expecting type '%s'" % type(s))
TypeError: not expecting type '<class 'NoneType'>'
ERROR: gcloud crashed (TypeError): not expecting type '<class 'NoneType'>'
This stack trace mentions an error in google-cloud-sdk/lib/googlecloudsdk/command_lib/util/gcloudignore.py, so I also reviewed my .gcloudignore file but was unable to find anything out of place:
.gcloudignore
.git
.gitignore
node_modules/
In an attempt to work around this bug I tried removing my .gcloudignore file, which produced a different error; the deploy still failed nevertheless:
Do you want to continue (Y/n)? Y
DEBUG: No bucket specified, retrieving default bucket.
DEBUG: Using bucket [gs://staging.finnsalud.appspot.com].
DEBUG: Service [appengineflex.googleapis.com] is already enabled for project [finnsalud]
Beginning deployment of service [finnsalud-staging]...
DEBUG: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 982, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 809, in Run
resources = command_instance.Run(args)
File "/google-cloud-sdk/lib/surface/app/deploy.py", line 115, in Run
return deploy_util.RunDeploy(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 669, in RunDeploy
deployer.Deploy(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 428, in Deploy
source_files = source_files_util.GetSourceFiles(
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/source_files_util.py", line 184, in GetSourceFiles
return list(it)
File "/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/util.py", line 165, in FileIterator
entries = set(os.listdir(os.path.join(base, current_dir)))
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/posixpath.py", line 76, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
ERROR: gcloud crashed (TypeError): expected str, bytes or os.PathLike object, not NoneType
Thinking this might be related to the version of my CLI, I also ran the following commands to update:
gcloud app update
gcloud components update
Unfortunately, this made no difference to the output.
I have noticed that when I run this command with the app.yaml env value set to flex, there are no updates to the Logging section on Google Cloud and no changes to the files uploaded to the project's storage bucket. To me, this indicates that the crash occurs in the CLI before any communication with the Google Cloud services takes place. If this is correct, then the cause is unlikely to be a bad configuration on Google Cloud and must instead be something (software or configuration) on my local machine.
I have also tried using the "Hello World" app.yaml configuration from the flexible environment's "Getting Started" page, to rule out a configuration error in my own application's app.yaml, but this also made no difference to the output.
Finally, if at any point I change env: flex back to env: standard, the issue disappears. Unfortunately, as stated above, that won't work for deploying my WebSockets feature.
This has gotten me thinking that the error is possibly due to a bug in the gcloud CLI. However, if this were the case, I would have expected to see many more bug reports from others who are also using GAE's flexible environment.
Regardless, given this stack trace points to code within the gcloud cli, I have opened a bug ticket with google which can be found here: https://issuetracker.google.com/issues/176839574
I have also seen this similar SO post, but it is not the exact error I am experiencing and remains unresolved: gcloud app deploy fails with flexible environment
If anyone has any ideas on other steps to try or methods to overcome this issue, I would be immensely grateful if you drop a note on this post. Thanks!
I deployed a Node.js application using the Quickstart for Node.js in the standard environment.
Then I changed the app.yaml file from:
runtime: nodejs10
to
runtime: nodejs
env: flex
Everything worked as expected.
It might be related to your specific use case.
Surprisingly, this issue does seem to be caused by a bug in the gcloud CLI. However, there is a workaround.
When the --appyaml flag is specified for a deployment to the flex environment, the CLI crashes with the messages outlined in my question above. However, if you copy your .yaml file, renaming it to app.yaml (the default), and drop the --appyaml flag when deploying, the build proceeds without errors.
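For example, with the staging file from the question:

cp app-staging.yaml app.yaml
gcloud app deploy --verbosity=debug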
If you have also experienced this error, please follow the Google issue linked above, as I am working with the Google engineers to make sure they reproduce and eventually fix this bug.
Broken app.yaml
runtime:nodejs14
Fixed app.yaml
runtime: nodejs14
I am dead serious. And:
gcloud info --run-diagnostics
was ZERO HELP.
Once I did this, the "ERROR: gcloud crashed (TypeError): expected string or bytes-like object" error went away.
I guess "colon + space" is part of the spec:
Why does the YAML spec mandate a space after the colon?
I'm new to cloud computing and I'm trying to use SSH to control my VM instance, but when I use this command (with debug):
gcloud compute ssh my-instance-name --verbosity=debug
it shows this error:
DEBUG: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Traceback (most recent call last):
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
    resources = calliope_command.Run(cli=self, args=args)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 784, in Run
    resources = command_instance.Run(args)
  File "/google/google-cloud-sdk/lib/surface/compute/ssh.py", line 262, in Run
    return_code = cmd.Run(ssh_helper.env, force_connect=True)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/command_lib/util/ssh/ssh.py", line 1256, in Run
    raise CommandError(args[0], return_code=status)
CommandError: [/usr/bin/ssh] exited with return code [255].
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I tried the solution in this link, but it didn't work:
https://groups.google.com/forum/#!topic/gce-discussion/O-c10TM4ZLM
SSH error code 255 is a general error returned by GCP. You can try one of the following options.
1. Wait a few minutes and try again. It is possible that:
The instance has not finished starting up.
Metadata for SSH keys has not finished being propagated to the project or instance.
The Guest Environment has not yet read the SSH keys metadata.
2. Verify that SSH access to the instance is not blocked by a firewall.
gcloud compute firewall-rules list | grep "tcp:22"
If necessary, create a firewall rule to allow TCP 22 for a given VPC network, subnet, or instance tag.
gcloud compute firewall-rules create ssh-allow-incoming --priority=0 --allow=tcp:22 --network=[VPC-Network]
3. Make sure that the root volume is not out of disk space (see the command after this list for inspecting the console log). Messages like the following will be visible in the console log when it is out of disk space:
...No space left on device...
...google-accounts: ERROR Exception calling the response handler.
[Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp',
'/usr/tmp', '/']...
4. Make sure that the instance has not run out of memory.
5. Verify that temporary SSH Keys metadata is set for either the project or instance.
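For option 3, you can inspect the console log without SSH access:

gcloud compute instances get-serial-port-output my-instance-name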
Finally, you could follow any of their supported or third-party methods.
Assuming you have the correct IAM permissions, it is much easier (and preferred by GCP) to use OS Login to SSH into an instance, rather than managing SSH keys.
In Cloud Shell, enter this:
gcloud compute --project PROJECTID project-info add-metadata --metadata enable-oslogin=TRUE
This enables OS Login on all instances in the project; instead of using SSH keys, GCP will check your IAM permissions and authenticate based on those.
If you are not project owner, make sure you have the compute.osloginviewer or admin permissions in Cloud IAM
Once enabled, try SSHing into the instance again using the command you posted.
This is not a concrete answer, but I think you should first set your project:
gcloud config set project PROJECT_ID
Then
gcloud compute ssh my-instance-name --verbosity=debug
This link would be useful:
https://cloud.google.com/sdk/gcloud/reference/compute/ssh
I have a GitLab project which is set up as follows:
myserver.com/SuperGroup/SubGroup/SubGroupProject
The tree of the project is one top-level txt file and one txt file within a directory. I get the tree from the GitLab API with:
myserver.com/api/v4/projects/1/repository/tree?recursive=true
[{"id":"aba61143388f605d3fe9de9033ecb4575e4d9b69","name":"myDirectory","type":"tree","path":"myDirectory","mode":"040000"},{"id":"0e3a2b246ab92abac101d0eb2e96b57e2d24915d","name":"1stLevelFile.txt","type":"blob","path":"myDirectory/1stLevelFile.txt","mode":"100644"},{"id":"3501682ba833c3e50addab55e42488e98200b323","name":"top_level.txt","type":"blob","path":"top_level.txt","mode":"100644"}]
If I request the contents of top_level.txt, they are returned without any issue via:
myserver.com/api/v4/projects/1/repository/files/top_level.txt?ref=master
However I am unable to access myDirectory/1stLevelFile.txt with any API call I try. E.g.:
myserver.com/api/v4/projects/1/repository/files/"myDirectory%2F1stLevelFile.txt"?ref=master
and,
myserver.com/api/v4/projects/1/repository/files/"myDirectory%2F1stLevelFile%2Etxt"?ref=master
Results in:
Not Found The requested URL /api/v4/projects/1/repository/files/myDirectory/1stLevelFile.txt was not found on this server.
Apache/2.4.25 (Debian) Server at myserver.com Port 443
myserver.com/api/v4/projects/1/repository/files/"myDirectory/1stLevelFile.txt"?ref=master and,
myserver.com/api/v4/projects/1/repository/files?ref=master&path=myDirectory%2F1stLevelFile.txt
Results in:
error "404 Not Found"
The versions of the components are:
GitLab 10.6.3-ee
GitLab Shell 6.0.4
GitLab Workhorse v4.0.0
GitLab API v4
Ruby 2.3.6p384
Rails 4.2.10
postgresql 9.6.8
According to my research, there was a similar bug which was fixed with the 10.0.0 update.
I also added my SSH key, although I doubt it has any effect, following this advice for the same issue in PHP.
Solution:
I eventually solved it by adjusting the Apache installation on the server.
Just follow these instructions: https://gitlab.com/gitlab-org/gitlab-ce/issues/35079#note_76374269
According to your code, I assume you are using curl.
If that is the case, why are you adding double quotes to your file path?
The docs do not contain them.
Can you test it like this, please?
curl --request GET --header 'PRIVATE-TOKEN: XXXXXXXXX' myserver.com/api/v4/projects/1/repository/files/myDirectory%2F1stLevelFile%2Etxt?ref=master
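Equivalently, if you are calling the API from Python, a small sketch using requests (the token is a placeholder; urllib.parse.quote with safe="" produces the %2F encoding the API expects):

import urllib.parse
import requests

# Percent-encode the full path, including the slash, as the files API requires.
path = urllib.parse.quote("myDirectory/1stLevelFile.txt", safe="")
url = f"https://myserver.com/api/v4/projects/1/repository/files/{path}?ref=master"
resp = requests.get(url, headers={"PRIVATE-TOKEN": "XXXXXXXXX"})
print(resp.status_code)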
I'm trying to host and distribute an XBMC addon on my site. I've made a repository which points to the directory where the addon zip file is. In the same folder I have an XML file which describes the addon, so the addon name and description are recognized by XBMC.
However, when trying to install the addon it shows 0% download progress and then the progress disappears, resulting in the following error in the xbmc.log file:
ERROR: CCurlFile::FillBuffer - Failed: HTTP response code said error(22)
According to curl's error page, this happens when:
CURLE_HTTP_RETURNED_ERROR (22)
This is returned if CURLOPT_FAILONERROR is set TRUE and the HTTP
server returns an error code that is >= 400.
Based on that, I assume the error may be caused by misconfigured access permissions (perhaps I need to change some .htaccess configuration?).
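One way to confirm which status code the server actually returns is to request the zip directly, for example (with the real path to the zip file):

curl -I https://myserver.com/repository/addon.zip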
Please help.
I eventually solved this on my own. Apparently the file structure was wrong: I needed to follow the file structure described in section 4.3 here in order for it to work.