Setting Metadata in Google Cloud Storage (Export from BigQuery)

I am trying to update the metadata (programmatically, from Python) of several CSV/JSON files that are exported from BigQuery. The application that exports the data is the same one that modifies the files (thus using the same server certificate). The export goes well, until I try to use the objects.patch() method to set the metadata I want. The problem is that I keep getting the following error:
apiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/storage/v1/b/<bucket>/<file>?alt=json returned "Forbidden">
Obviously, this has something to do with bucket or file permissions, but I can't manage to get around it. How come, if the same certificate is being used to write the files and to update their metadata, I'm unable to update it? The bucket was created with the same certificate.

If that's the exact URL you're using, it's a URL problem: you're missing the /o/ between the bucket name and the object name.
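For reference, a minimal sketch of that patch call using the Python API client (bucket and object names below are placeholders); the client builds the /o/ URL for you:
from googleapiclient.discovery import build  # the current name of the apiclient module

# Placeholders: replace "my-bucket" and "export.csv" with your bucket and object.
service = build("storage", "v1")  # credentials are picked up from the environment
service.objects().patch(
    bucket="my-bucket",
    object="export.csv",  # inserted after /o/ in the request URL
    body={"metadata": {"source": "bigquery-export"}},  # custom metadata to set
).execute()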

Related

How to read and write data in Spark via an S3 access point

I am attempting to use an S3 access point to store data in an S3 bucket. I have tried saving just as I would if I had direct access to the bucket:
someDF.write.format("csv").option("header","true").mode("Overwrite")
.save("arn:aws:s3:us-east-1:000000000000:accesspoint/access-point/prefix/")
This returns the error:
IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: "arn:aws:s3:us-east-1:000000000000:accesspoint/access-point/prefix/"
I haven't been able to find any documentation on how to do this. Are access points not supported? Is there a way to set up the access point as a custom data source?
Thank you
The problem is that you have provided the ARN instead of the S3 URL. The URL would be something like this (assuming accesspoint is the bucket name):
s3://accesspoint/access-point/prefix/
There is a button in the AWS console, at the top right when you are viewing the object or prefix: Copy S3 URL.
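A minimal PySpark sketch of the corrected call ("accesspoint" again stands in for the bucket or access point alias; on vanilla Hadoop/Spark the scheme is typically s3a:// rather than s3://):
# Write via the S3 URL instead of the ARN.
(someDF.write
    .format("csv")
    .option("header", "true")
    .mode("overwrite")
    .save("s3://accesspoint/access-point/prefix/"))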

Can we read files from a server path using any fs method in Node.js

In my case I need to read file/icon.png from a cloud storage bucket, via a token-based URL/path; the token resides in the request header.
I tried fs.readFile('serverpath'), but it returned an 'ENOENT' error, i.e. 'No such file or directory', even though the file exists at that path. So can these methods make calls to a server and read files from it, or do they only work with static local paths? If it's the latter, how do I read a file from a cloud bucket/server in my case?
I need to pass that file path to the UI, to show this icon.
Use this lib to handle GCS operations:
https://www.npmjs.com/package/@google-cloud/storage
If you do need to use fs, install gcsfuse (https://cloud.google.com/storage/docs/gcs-fuse), mount the bucket to your local filesystem, then use fs as you normally would.
I would like to complement Cloud Ace's answer by saying that if you have the Storage Object Admin permission, you can make the URL of the image public and use it like any other public URL.
If you don't want to make the URL public, you can get temporary access to the file by creating a signed URL.
Otherwise, you'll have to download the file using the GCS Node.js client.
I posted this as an answer as it is too long to be a comment.
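As an illustration, a minimal signed-URL sketch with the Python GCS client; the Node.js client offers the equivalent via file.getSignedUrl(). Bucket and object names are placeholders:
from datetime import timedelta
from google.cloud import storage

# Placeholders: "my-bucket" and "icons/icon.png" stand in for your bucket and file.
client = storage.Client()
blob = client.bucket("my-bucket").blob("icons/icon.png")

# Grants temporary read access without making the object public.
url = blob.generate_signed_url(expiration=timedelta(minutes=15))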

Amazon S3 403 Forbidden Error for KML files but not JPG files

I am able to successfully upload (put object) JPG files to S3 with a particular code path, but receive a 403 Forbidden error when using the same code path to upload a KML file. I am not restricting file types explicitly via the bucket policy, but I feel this must somehow be tied to the bucket policy or CORS configuration.
I was using code based off the Heroku tutorial for uploading images to Amazon S3. The issue ended up being the '+' in the appropriate MIME type, "application/vnd.google-earth.kml+xml": the '+' was being replaced with a space when our own endpoint for generating signed S3 requests fetched the file-type query parameter. We were able to quickly fix this by forcing the ContentType to be "application/vnd.google-earth.kml+xml" for all KML files going to that endpoint.
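To illustrate the idea (sketched in Python/boto3 rather than the original Node endpoint, with placeholder bucket/key names): pin the ContentType server-side when generating the signed request instead of trusting the decoded query parameter, where the '+' becomes a space:
import boto3

s3 = boto3.client("s3")
# The upload must then send this exact Content-Type header to match the signature.
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-bucket",     # placeholder
        "Key": "uploads/map.kml",  # placeholder
        "ContentType": "application/vnd.google-earth.kml+xml",  # forced for .kml
    },
    ExpiresIn=300,
)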

How can I access the parameters of a service on a Carbon server in plain text

What I've done is break the default 'Version' service on my WSO2 DSS: I tried to set the Scopes variable for WS-Discovery and didn't put a closing tag/element when creating the parameter.
Now when I try to access the parameters screen I get an XML parse error:
TID: [0] [WSO2 Data Services Server] [2012-08-22 12:38:04,404] ERROR {org.wso2.carbon.service.mgt.ServiceAdmin} - Error occured while getting parameters of service : Version
{org.wso2.carbon.service.mgt.ServiceAdmin}org.apache.axiom.om.OMException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '<' (code 60) in end tag Expected '>'. at [row,col {unknown-source}]: [2,58] at org.apache.axiom.om.impl.builder.StAXOMBuilder.next(StAXOMBuilder.java:296) at
I'm assuming this is stored in the H2 database; I've tried looking for the parameter in the .db file using Notepad, but I can't find it.
Is there another way to connect to/browse the H2 db?
I've scanned through the repository, database and conf directories for clues, without success.
UPDATE:
Yes, you can connect to the H2 db using the included Database Explorer under the Tools menu.
Use the connection details found in the repository/conf/registry.xml file.
Then you can run SQL queries on it (I haven't found the answer yet, though).
UPDATE 2:
I don't think the parameters are held in the H2 db, but I managed to fix my problem by:
downloading the Version.aar file using the link on the list services page
deleting the Version service
copying the Version.aar file into the repository/deployment/server/axis2services dir
I guess deleting the service removed any records/references to my broken parameter.
I believe you've tried setting service parameters via the UI? Usually the service parameters you specify via the UI do not get saved in the services.xml of the original Axis2 service archive. Instead, they get saved in the registry that ships with DSS and get applied to the service at runtime. But if you specify a malformed parameter, it wouldn't be saved in the registry; instead, an exception is thrown while trying to engage that parameter. So there'll be no record saved corresponding to that kind of malformed parameter.
Hope this helps!
Cheers,
Prabath

AWS S3 multipart upload to a named directory using C# and the .NET SDK

The following fails with this error message:
"The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed."
UploadPartRequest uploadRequest = new UploadPartRequest()
.WithBucketName(IniValues.Instance.TargetBucketName)
.WithKey("junk/20070125.log")
.WithUploadId(initResponse.UploadId)
.WithPartNumber(i)
.WithPartSize(partSize)
.WithFilePosition(filePosition)
.WithFilePath("C:\\InetTemp\\Logs\\20070125.log");
The problem is with the .WithKey("junk/20070125.log"). If I strip out the "junk/" it works perfectly.
So the question is: how does one upload a file to a specific AWS directory? All the documentation I found shows the correct way to be prepending the directory name and a forward slash. What am I missing?
It turns out I was adding the folder name to the key string after calling InitiateMultipartUploadRequest. Once I changed the key value to be consistent across the upload calls, it began to work.
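The same flow sketched in Python/boto3 for illustration (the bucket name is a placeholder): the Key must be identical in the initiate, upload-part, and complete calls tied to one UploadId:
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "junk/20070125.log"

init = s3.create_multipart_upload(Bucket=bucket, Key=key)
with open(r"C:\InetTemp\Logs\20070125.log", "rb") as f:
    part = s3.upload_part(
        Bucket=bucket, Key=key,  # same key as in create_multipart_upload
        UploadId=init["UploadId"], PartNumber=1, Body=f.read(),
    )
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=init["UploadId"],
    MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
)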
