SVG image Download URL Link is not working in FFImageLoading in Xamarin Forms [duplicate] - svg

What I want to do is create a direct-link URL to an mp3 file located on my Google Drive and use it as the source of an HTML5 audio element, but I get a 403 error.
I know that when you create a share link for a file on Google Drive, you get not a direct URL to the file but a URL for viewing it through a viewer, such as
https://drive.google.com/file/d/<file ID>/view?usp=sharing
I googled and found that it is possible to rewrite this into a direct link URL like this:
https://drive.google.com/uc?id=<file ID>
I set this URL in the src property of my audio element. However, when I call the play() method, the following error is thrown:
GET https://drive.google.com/uc?id=<file ID> 403
myProject.html:1 Uncaught (in promise) DOMException: Failed to load because no supported source was found.
So I tried to access the URL https://drive.google.com/uc?id=<file ID> from my browser.
Then, I got this:
403. That’s an error.
We're sorry, but you do not have access to this page. That’s all we know.
I have tried many times, so it is unlikely that I'm mis-pasting the <file ID>.
What should I do to create a valid direct link to the file?
I would appreciate any information.
Progress
I figured out what was wrong.
The problem was that the file on Google Drive could be accessed only by an authorized user, which means only the owner of the Google account could open the file URL.
When I accessed it from a Chrome browser signed in to that Google account, the error didn't occur.
However, I want to serve this file to everyone.
What should I do to grant access to other people?

You understand that Google Drive is not a file hosting service, right? This solution is not going to be very stable even if you do get it to work.
For it to work you're going to need to set the file to public so that everyone can access it. Then I would be willing to bet you will need an API key to do this in the long run.
Also remember that the file ID is not stable; it can change in the future if, for example, you upload the file again.

I solved this on my own. I right-clicked the file, clicked Get link, and changed the sharing setting from Restricted to Anyone with the link. Then the 403 error vanished and anyone could access the file.
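For anyone who wants to verify the fix programmatically, here is a minimal sketch (assuming the Python requests library; the file ID is a placeholder) that checks whether the uc?id= direct link is now served to an unauthenticated client:

import requests

FILE_ID = "<file ID>"  # placeholder: copy the ID from the Drive sharing link

# Once sharing is set to "Anyone with the link", this should return 200 instead of 403.
# (Very large files may get an interstitial virus-scan page rather than the raw bytes.)
resp = requests.get(f"https://drive.google.com/uc?id={FILE_ID}")
print(resp.status_code, resp.headers.get("Content-Type"))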

Related

Is it possible to have a link to raw content of file in Azure DevOps

It's possible to generate a link to the raw content of a file in GitHub; is it possible to do the same with VSTS/Azure DevOps?
Even after reading the existing answers, I still struggled with this a bit, so I wanted to leave a bit more of a thorough response.
As others have said, the pattern is (query split onto separate lines for ease of reading):
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/{{providerName}}/filecontents
?repository={{repository}}
&path={{path}}
&commitOrBranch={{commitOrBranch}}
&api-version=5.0-preview.1
But how do you find the values for these variables? If you go into your Azure DevOps, choose Repos > Files from the left navigation, and select a particular file, your current URL should look something like this:
https://dev.azure.com/{{organization}}/{{project}}/_git/{{repository}}?path=%2Fpackage.json
You should use those values for organization, project, and repository. For path, you'll see a URL-encoded version of the Unix-style file path. %2F is the URL encoding for /, so that path is actually just /package.json (a tool like Postman will do that encoding for you).
Commit or branch is pretty self-explanatory; you either know what you want for this value or you should use master. I have "hard-coded" the API version in the above URL because that's what the documentation currently points to.
For the last variable, you need providerName. In short, you should probably use TfsGit. I got this value from looking through the list of source providers and looking for one with a value of true for supportedCapabilities.queryFileContents.
However, if you just request this URL you'll get a "203 Non-Authoritative Information" response back because you still need to authenticate yourself. Referring again to the same documentation, it says to use Basic auth with any value for the username and a personal access token for the password. You can create a personal access token at https://dev.azure.com/{{organization}}/_usersSettings/tokens; ensure that it has the Token Administration - Read & Manage permission.
If you're unfamiliar with this sort of thing, again Postman is super helpful with getting these requests working before you get into the code.
So if you have a repository with a src directory at the root, and you're trying to get the file contents of src/package.json, your URL should look something like:
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/TfsGit/filecontents?repository={{repository}}&commitOrBranch=master&api-version={{api-version}}&path=src%2Fpackage.json
And don't forget the basic auth!
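To tie the pieces together, here is a rough Python sketch of the same request (the requests library, the organization/project/repository names, and the environment variable holding the PAT are all assumptions, not part of the answer above):

import os
import requests

# Placeholder values; substitute your own.
ORG = "myOrg"
PROJECT = "myProject"
REPO = "myRepo"
PAT = os.environ["AZURE_DEVOPS_PAT"]  # personal access token

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/sourceProviders/TfsGit/filecontents"
params = {
    "repository": REPO,
    "commitOrBranch": "master",
    "path": "/src/package.json",   # requests URL-encodes this to %2Fsrc%2Fpackage.json
    "api-version": "5.0-preview.1",
}

# Basic auth: any username, PAT as the password.
resp = requests.get(url, params=params, auth=("anything", PAT))
resp.raise_for_status()
print(resp.text)  # raw file contents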
Sure, here's the REST call needed:
GET https://feeds.dev.azure.com/{organization}/_apis/packaging/Feeds/{feedId}/packages/{packageId}?includeAllVersions={includeAllVersions}&includeUrls={includeUrls}&isListed={isListed}&isRelease={isRelease}&includeDeleted={includeDeleted}&includeDescription={includeDescription}&api-version=5.0-preview.1
https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact%20%20details/get%20package?view=azure-devops-rest-5.0#package
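As a rough illustration only (the feed and package IDs, and the use of the Python requests library with a PAT, are placeholders rather than anything taken from the docs above), that call could look like:

import os
import requests

ORG = "myOrg"                 # placeholder organization
FEED_ID = "my-feed"           # placeholder feed name or GUID
PACKAGE_ID = "my-package-id"  # placeholder package GUID
PAT = os.environ["AZURE_DEVOPS_PAT"]

url = (f"https://feeds.dev.azure.com/{ORG}/_apis/packaging/"
       f"Feeds/{FEED_ID}/packages/{PACKAGE_ID}")
resp = requests.get(
    url,
    params={"includeAllVersions": "true", "api-version": "5.0-preview.1"},
    auth=("anything", PAT),  # Basic auth: any username, PAT as password
)
resp.raise_for_status()
print(resp.json())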
I was able to get the raw contents of a file using this URL.
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/{providerName}/filecontents?serviceEndpointId={serviceEndpointId}&repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
I got this from here.
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
You can obtain the raw URL using Chrome.
Turn on Developer Tools and view the Network tab.
Navigate to the required file in the DevOps portal (Content panel). Once the content view is visible, check the Network tab again and find the request whose URL starts with "Items?Path"; this is a JSON response which contains the required "url:" element.
Drag the filename from the attachments window and drop it into any other MS application to get the raw URL or linked filename.
Most answers address this well, but in the context of a public repo with anonymous access the API is different. Here is the one that works in that scenario:
https://dev.azure.com/{{your_user_name}}/{{project_name}}/_apis/git/repositories/{{repo_name_encoded}}/items?scopePath={{path_to_your_file}}&api-version=6.0
This is the exact equivalent of the "raw" url provided by Github.
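A short Python sketch mirroring the URL pattern above (the requests library and all names and paths are placeholders; anonymous access only works if the project is public):

import requests

ORG = "myOrg"            # placeholder
PROJECT = "myProject"    # placeholder public project
REPO = "myRepo"          # placeholder

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/git/repositories/{REPO}/items"
params = {
    "scopePath": "/README.md",  # path to the file you want
    "api-version": "6.0",
}

# No auth argument: relies on the project allowing anonymous access.
resp = requests.get(url, params=params)
resp.raise_for_status()
print(resp.text)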
Another way that may be helpful if you want to quickly get the raw URL for a specific file that you are browsing:
Install the browser extension named "Undisposition"
From the dot menu (top right) choose "Download": the file will open in a new browser tab from which you can copy the URL
(edit: unfortunately this will only work for file types that the browser knows how to open, otherwise it will still offer to download it...)
I am fairly new to this and had an issue accessing a raw file in an Azure DevOps repo. It's straightforward in GitHub.
I wanted to download a file in CMD and Bash using curl.
First I browsed to the file contents in the browser and made a note of the bold sections:
https://dev.azure.com/**myOrg**/_git/**myProjectName**?path=%2F**MyFileName.ps1**
I then constructed the URL similar to what @Zach posted above.
https://dev.azure.com/**myOrg**/**myProjectName**/_apis/sourceProviders/TfsGit/filecontents?repository=**myProjectName**&commitOrBranch=**master**&api-version=5.0-preview.1&path=%2F**MyFileName.ps1**
Now when I paste the above URL into the browser it displays the content in raw form, similar to GitHub.
The difference was that I had to set up a PAT (Personal Access Token) in my Azure DevOps account and then authenticate the URL; the DOS/Bash example is below:
curl -u "<username>:<password>" "https://dev.azure.com/myOrg/myProjectName/_apis/sourceProviders/TfsGit/filecontents?repository=myProjectName&commitOrBranch=master&api-version=5.0-preview.1&path=%2FMyFileName.ps1" -# -L -o MyFileName.ps1

Image inaccessible from python script but accessible in browser

I was trying to automatically download some traffic camera photos to play around with some image object recognition scripts, and I have found that some of the links throw a 403: Forbidden error when I try to download them from Python, yet I can access them in a browser. One such image is at this link: https://www.svz-bw.de/kamera/ftpdata/KA101/KA101_gross.jpg
This code:
import urllib.request

urllib.request.urlretrieve("https://www.svz-bw.de/kamera/ftpdata/KA101/KA101_gross.jpg", "traffic.jpg")
returns a 403 error for me. What gives? I can understand that these organizations may not be keen on having people bog down their servers with automated downloads, and perhaps there are some GDPR-related constraints, but I am actually more curious about how they are able to detect that the request is coming from a script rather than from normal use.
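For what it's worth, one common cause (an assumption here, not something confirmed for this particular server) is that urllib's default Python-urllib/3.x User-Agent header gives the script away and gets filtered. A minimal sketch that retries the same download while presenting a browser-like User-Agent:

import urllib.request

url = "https://www.svz-bw.de/kamera/ftpdata/KA101/KA101_gross.jpg"

# Hypothetical fix: send a browser-like User-Agent instead of urllib's default,
# which some servers reject with 403.
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
)
with urllib.request.urlopen(req) as resp, open("traffic.jpg", "wb") as out:
    out.write(resp.read())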

Retrieve BLOGS_UPLOADED_IMAGES in java

I have some Java code that retrieves blogs through the REST APIs. I am not using the Social Business Toolkit; we have our own framework for that.
The application works perfectly on an on-premises Connections environment and has worked on multiple versions.
However, when switching to Connections Cloud, some parts stopped working.
We get a 403 - Forbidden exception in two cases:
1) Getting the details of a blog post: /blogs/[blog-id]/feed/entry/atom?entryid=[entry-id]
2) Getting images inside the blog post: /blogs/[blog-id]/resource/BLOGS_UPLOADED_IMAGES/[image file name]
I have fixed issue 1) by switching to the publishing API: /blogs/[blog-id]/api/entries/[entry-id].
I cannot find a way to fix issue 2). I have also found two other image URLs:
https://apps.ce.collabserv.com/blogs/[blog-id]/api/media/[file-name]
https://apps.ce.collabserv.com/blogs/[blog-id]/api/media/BLOGS_UPLOADED_IMAGES/[file-name].media
Both return:
<sp_0:error xmlns="http://incubator.apache.org/abdera" xmlns:sp_0="http://incubator.apache.org/abdera">
<code>404</code>
<message>Not Found</message>
</sp_0:error>
I want to authenticate using Basic Authentication where possible. This does not appear to work for the URLs that return 403.
My guess is that the Basic Authentication header is not picked up. I have seen this before.
I used to fix this by first calling another URL that does support Basic Authentication and then using the Ltpa cookies to authenticate the image URL (sketched after the reproduction steps below).
That also does not work here: I do get LtpaTokens, but when I pass all the cookies to the image URL, the image still does not load.
I prefer not to use OAuth or OAuth 2 at this moment. Is there any other way to fix this?
Anybody else managed to retrieve BLOGS_UPLOADED_IMAGES?
The issue can also be reproduced in a browser:
Make sure you are not yet authenticated and the blog has posts with images.
Go to /blogs/[blog-id]/api/media
Authenticate using the popup in the browser. The Atom feed now appears; it contains the images of your blog.
403 when opening: /blogs/[blog-id]/resource/BLOGS_UPLOADED_IMAGES/[image]
404 XML when opening: /blogs/[blog-id]/api/media/* links
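For reference, the cookie hand-off described in the question (call a Basic-auth-friendly URL first, then reuse the Ltpa cookies for the image) could be sketched in Python as below. The original code is Java, the URLs and credentials are placeholders, and for the asker this approach still ended in a 403:

import requests

BASE = "https://apps.ce.collabserv.com"  # Connections Cloud host from the question
BLOG_ID = "my-blog-id"                   # placeholder
IMAGE = "my-image.png"                   # placeholder file name

session = requests.Session()

# Step 1: hit an endpoint that honours Basic Authentication so the server
# issues the LtpaToken/LtpaToken2 cookies into the session.
feed = session.get(
    f"{BASE}/blogs/{BLOG_ID}/api/media",
    auth=("user@example.com", "password"),  # placeholder credentials
)
feed.raise_for_status()

# Step 2: reuse those cookies for the image resource that rejects Basic auth.
image = session.get(f"{BASE}/blogs/{BLOG_ID}/resource/BLOGS_UPLOADED_IMAGES/{IMAGE}")
print(image.status_code)  # the question reports this still returns 403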

Getting chromecast sample code to run

I have found no joy in getting any of the sample programs for Chrome to connect to my Chromecasts.
The Chromecasts have been registered and I am able to browse to their IP address port 9222 successfully.
Both the Chrome browser and Beta extension are up to date.
I have tried the CastHelloVideo-Chrome, CastMedia-Chrome and Cast-Tictactoe-Chrome and all fail to connect. The developer console shows a pair of errors:
GET chrome-extension://boadgeojelhgndaghljhdicfkmllpafd/cast_sender.js net::ERR_FAILED
and
Failed to execute 'postMessage' on 'DOMWindow': The target origin provided ('file://') does not match the recipient window's origin ('null').
When testing our own code we get an error when calling requestSession, but the message returned by chrome.cast.Error is useless since the function and variable names have been obfuscated.
I also had a difficult time testing the examples for Chrome. I decided to use Chrome for testing because debugging JavaScript is so much quicker than going straight to Android. I spent hours trying to figure out why I kept getting the errors GET chrome-extension://boadgeojelhgndaghljhdicfkmllpafd/cast_sender.js net::ERR_FAILED and Failed to execute 'postMessage' on 'DOMWindow': The target origin provided ('file://') does not match the recipient window's origin ('null') when I ran the samples from my PC but not when I ran them from an example website at http://www.videws.com/eureka/helloVideos/ provided by one of the Cast developers at Google. I kept reading and trying different combinations from his readme notes until it dawned on me what he meant by "Put all files on your own server" instead of "computer".
I created a public web link on my Google Drive, made the folder public, and copied all the files there. When I go to Google Drive on the web and preview the example (index.html), it runs beautifully. I tried tic-tac-toe; it also runs.
So the answer is: you need to run it off a website, not from a local file on your computer (Ctrl-O in Chrome).
I hope this will help you get going with Cast.
Danh
I was finally able to get connected, but from Android. Many steps will be the same though.
I tested this: https://github.com/googlecast/CastHelloText-android It lets you speak into the phone, and what you say appears on the TV/Chromecast. I didn't install the formal sender app, but I was able to load the Tic-Tac-Toe receiver as well, so I have seen them both on my Chromecast.
I couldn't connect until I properly set up the RECEIVER APPLICATION. You didn't mention it.
What I did, from where I think you are at (I just double-checked my receiver app settings):
Copy the receiver.html file provided in the sample sender app. Place it in a public Dropbox folder and copy that public link to your clipboard.
Go back to where you registered your Chromecast device(s), https://cast.google.com/publish, and Add An Application. I called mine ReceiverSimple.
Edit the app you just created, and paste that public link into the URL field. For you, set the platform to Chrome. It did not seem to matter whether or not I included the package name, so try leaving it blank.
Save it. Now copy to your clipboard the Application ID for the receiver you just created.
Open the provided sender app source code and find where it uses APP_ID (hopefully R.Strings or the equivalent in Chrome). Paste that App ID in. That will tell your client to use your receiver app (and therefore load that receiver.html file onto the Chromecast screen).
Also try a Chromecast reboot as another means of sanity checking.
I think you're close.

Security Sandbox Violation - loading filesystem and networking SWF files

I have built my entire website with Flash and embedded several SWF objects (slideshows) into it. Everything works fine when I publish it as an SWF movie, but now that I want to upload my website, an error message occurs saying:
Error #2044: Unhandled SecurityErrorEvent:. text=Error #2140: Security sandbox violation: file:///mylayout.swf cannot load file:///slideshow_1.swf. Local-with-filesystem and local-with-networking SWF files cannot load each other.
I know that it has something to do with the fact that one of the swf files is local to the filesystem and the other local via networking, but in my publish settings, I told it to access local files only. That didn't help.
I am hosting my website at www.all-inkl.com; besides that, I have not uploaded it yet; I'm just testing it offline. I know I should add this code somewhere:
<allow-access-from domain="localhost" secure="true"/>
but I'm not sure where to add it. Maybe to my timeline?
The crossdomain.xml file should have your server name specified. For example, take a look at http://www.msn.com/crossdomain.xml
You will have to specify the domain names there. Your server should also have a crossdomain.xml (typically served from the root of the web server); add the corresponding server name there. For example, if you are using localhost, try adding
<allow-access-from domain="localhost" secure="true"/>
Check your SWF loading paths. Try to specify the full path, like "http://www.yourdomain.com/yourweb/mylayout.swf", for every SWF, and in the HTML where the SWF is embedded set allowScriptAccess to "always".

Resources