GitLab snippets file access

How can we access a snippet file from a (private) GitLab repository?
I can access and clone my (private) projects using SSH, but the problem with snippets is that the raw URL doesn't include the filename. Let me give you an example:
I create a new snippet:
Title: My new patch
Access: Public (let's say public, to keep the example clear)
File: my_patch.patch (filename)
…
When I view the raw file, it returns a URL like https://gitlab.com/snippets/id/raw, but I need something like https://gitlab.com/snippets/id/my_patch.patch.
Is it possible?

I do not think that sharing a snippet like this is supported at the moment.
Check and upvote these feature proposals for this enhancement:
https://gitlab.com/gitlab-org/gitlab-ce/issues/22337
https://gitlab.com/gitlab-org/gitlab-ce/issues/24318
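As a workaround, you could fetch the raw URL yourself and save the content under the filename you want. A minimal Node.js sketch (the snippet id is a placeholder, and this assumes the snippet is public; a private snippet would additionally need a PRIVATE-TOKEN header against the API):

import { writeFile } from 'node:fs/promises';

// Fetch the raw snippet content (the URL carries no filename)...
const res = await fetch('https://gitlab.com/snippets/12345/raw');

// ...and save it locally under the filename you wanted.
await writeFile('my_patch.patch', await res.text());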

Related

Trying to use HttpClient.GetStreamAsync straight to the ADLS FileClient.UploadAsync

I have an Azure Function that will call an external API via HttpClient. The external API returns a JSON response. I want to save the response directly to an ADLS File.
My simplistic code is:
public async Task UploadFileBulk(Stream contentToUpload)
{
    await this._theClient.FileClient.UploadAsync(contentToUpload);
}
The this._theClient is a simple wrapper class around the various Azure Data Lake classes such as DataLakeServiceClient, DataLakeFileSystemClient, DataLakeDirectoryClient, DataLakeFileClient.
I'm confident this wrapper works as I expect: I spin one up, set the service, filesystem, directory, and then a filename to create. I've used this wrapper class to create directories etc., so it works as I expect.
I am calling the above method as follows:
await dlw.UploadFileBulk(await this._httpClient.GetStreamAsync("<endpoint>"));
I see the file getting created in the lake directory with the name I want; however, if I then download the file using Storage Explorer and try to open it in, say, VS Code, it's not in a recognisable format (I can "force" VS Code to open it, but it looks like a binary format to me).
If I sniff the traffic with Fiddler, I can see the content from the external API is JSON: the Content-Type is application/json and the body shows in Fiddler as JSON.
If I look at the calls to the ADLS endpoint I can see a PUT call followed by two PATCH calls.
The first PATCH call looks like it is the one sending the content; it has a Content-Type header of application/octet-stream and the request body is the "binary looking content".
I am using HttpClient.GetStreamAsync as I don't want my Function to have to load the entire API payload into memory (some of the external API endpoints return very large files, over 100 MB). I am thinking I can "stream the response from the external API straight into ADLS".
Is there a way to change how the ADLS FileClient.UploadAsync(Stream stream) method works so I can tell it to upload the file as a JSON file with a content type of application/json?
EDIT:
It turns out the external API was sending back zipped content, so once I added the following AutomaticDecompression code to my Function's startup, the files were uploaded to ADLS as expected.
public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddHttpClient("default", client =>
    {
        client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip, deflate");
    }).ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    });
}
Gaurav Mantri has given me some pointers on whether the pattern of "streaming from an output to an input" is actually correct; I will research this further.
Regarding the issue, please refer to the following code:
// Set the Content-Type of the uploaded file via upload options
var uploadOptions = new DataLakeFileUploadOptions();
uploadOptions.HttpHeaders = new PathHttpHeaders();
uploadOptions.HttpHeaders.ContentType = "application/json";
await fileClient.UploadAsync(stream, uploadOptions);

Renaming/moving a public file on Firebase Cloud Storage using Node.js

The documentation explains how to rename a file in Firebase Cloud Storage using Node.js. However, it turns out that after renaming a public file, it's not public anymore. Is it possible to make it public while moving it?
When uploading a file, it's possible to set the option predefinedAcl. Is there such an option in move()?
The API documentation for move() says that it is not an atomic operation. It's actually a combination of copy() and delete(). Given that implementation detail, and the lack of any alternatives in the API surface, it looks like your only option is to set the ACL on the destination file after you copy it with the SDK.
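For example, a minimal sketch with the @google-cloud/storage Node.js client (which firebase-admin wraps); the bucket and object names are hypothetical:

import { Storage } from '@google-cloud/storage';

const storage = new Storage();
const bucket = storage.bucket('my-bucket');

// move() is copy() + delete() under the hood, and the ACL is not carried over
await bucket.file('images/old.png').move('images/new.png');

// Re-apply public read access on the destination object
await bucket.file('images/new.png').makePublic();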

Can we read files from a server path using any fs method in Node.js?

In my case I need to read file/icon.png from a cloud storage bucket, via a token-based URL/path; the token resides in the request header.
I tried to use fs.readFile('serverpath') but it returned an 'ENOENT' error, i.e. 'No such file or directory', even though the file does exist at that path. So are these methods able to make calls and read files from a server, or do they only work with local paths? If the latter, how do I read a file from the cloud bucket/server in my case?
I need to pass that file path to the UI, to show this icon.
Use this lib to handle GCS operations:
https://www.npmjs.com/package/@google-cloud/storage
If you do need to use fs, install gcsfuse (https://cloud.google.com/storage/docs/gcs-fuse), mount the bucket to your local filesystem, then use fs as you normally would.
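For instance, a minimal download sketch with that client (bucket and paths are hypothetical):

import { Storage } from '@google-cloud/storage';

const storage = new Storage();

// Download the object to a local path, then read it with fs as usual
await storage
  .bucket('my-bucket')
  .file('file/icon.png')
  .download({ destination: '/tmp/icon.png' });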
I would like to complement Cloud Ace's answer by saying that if you have Storage Object Admin permission you can make the URL of the image public and use it like any other public URL.
If you don't want to make the URL public you can get temporary access to the file by creating a signed URL.
Otherwise, you'll have to download the file using the GCS Node.js Client.
I posted this as an answer as it is quite long to be a comment.
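As an illustration of the signed URL approach, a minimal sketch (bucket and object names are hypothetical):

import { Storage } from '@google-cloud/storage';

const storage = new Storage();

// Generate a temporary read-only URL for the private object
const [url] = await storage
  .bucket('my-bucket')
  .file('file/icon.png')
  .getSignedUrl({
    action: 'read',
    expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
  });

// Pass this URL to the UI to render the icon
console.log(url);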

How to retrieve the content of a pull request patch file from a private repo?

I have a private repo and I need to get the patch file contents from a pull request.
I am using this Node.js API.
github.pullRequests.get(msg, function (err, p) {
  // [...]
  console.log(p.patch_url); // I get the patch URL, something like: https://github.com/:user/:repo/pull/1.patch
  // [...]
});
How can I get the content of that file either using the API or some other method (curl, etc)?
github.repos.getContent doesn't seem to help (or I might be sending the wrong path for this file).
It seems that I missed this, from the GitHub API documentation:
Alternative Response Formats
Pass the appropriate media type to fetch diff and patch formats.
The media type for the patch format is:
application/vnd.github.VERSION.patch
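A minimal sketch of fetching the patch with that media type (owner, repo, pull number, and token are placeholders; the token is what grants access to a private repo):

// Request the pull request in patch format via the Accept header
const res = await fetch('https://api.github.com/repos/OWNER/REPO/pulls/1', {
  headers: {
    Accept: 'application/vnd.github.VERSION.patch',
    Authorization: 'token YOUR_TOKEN', // required for a private repo
  },
});

const patch = await res.text(); // raw patch file contents
console.log(patch);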

Getting an object's links in Rackspace cloud files API

I am using the Java jclouds API for access to my Rackspace cloud files account.
I can create and list containers, and upload objects, but I can't figure out how to get the public links for an uploaded object. (I can see these public links from within the Rackspace control panel, by right-clicking on the object - there are 4 types: HTTP, HTTPS, Streaming, iOS Streaming).
The closest I can get is by using object.getInfo() to get the object's metadata. This includes a URI, but this doesn't resemble the public links I find from within the control panel.
Anyone know what I'm doing wrong?
I figured it out...
First, I should get the public URI from the object's container, not from the object itself.
Then, using a CloudFilesClient object, I call getCDNMetadata("containername").getCDNUri() on the container.
Here is more information and some sample code to get the specific file CDN address.
For more details you can checkout the Java guide:
https://developer.rackspace.com/docs/cloud-files/quickstart/?lang=java
First get the cloud files api:
CloudFilesApi cloudFilesApi = ContextBuilder.newBuilder("rackspace-cloudfiles-us")
    .credentials("{username}", "{apiKey}")
    .buildApi(CloudFilesApi.class);
From there you can query the container:
CDNApi cdnApi = cloudFilesApi.getCDNApi("{region}");
CDNContainer cdnContainer = cdnApi.get("{containerName}");
Now with that CDNContainer you can get the specific web address that you need:
URI httpURI = cdnContainer.getUri();
URI httpsURI = cdnContainer.getSslUri();
This will get you the base URI for the container. To get the final address for your specific file, append /{your_file_name.extension} to the end of the address. For example, if my base URI were converted to a URL and then to a String, it might look like:
http://123456asdf-qwert987653.rackcdn.com/
From here I can get a file with the name example.mp4 with the following address:
http://123456asdf-qwert987653.rackcdn.com/example.mp4
This all assumes that you have already enabled CDN on the container.
