I have an issue implementing retrieval of the URL of unsigned uploaded images. The way it is described on this page:
http://cloudinary.com/documentation/java_image_upload
does not fit well with the method I used for the unsigned upload:
@Override
protected Void doInBackground(String... params) {
    Map config = new HashMap();
    config.put("cloud_name", "we4x4");
    Cloudinary cloudinary = new Cloudinary(config);
    try {
        cloudinary.uploader().unsignedUpload(("" + RealFilePath), "frtkzlwz",
                Cloudinary.asMap("tags", UserID, "resource_type", "auto"));
    } catch (IOException e) {
        e.printStackTrace();
        progressDialog.setMessage("Error uploading file");
        progressDialog.hide();
    }
    return null;
}
Could someone explain to me how and where I write the code to get the address of the uploaded images? I am using Android Studio.
I was able to upload a file and recall its address using the following code, but when I try to substitute .upload with .unsignedUpload, as I used before to upload without my full config, the syntax gets underlined in red. I have tried several ways to patch it but none work. I would appreciate some tips on the right syntax to achieve this.
Cloudinary cloudinary = new Cloudinary(ObjectUtils.asMap(
        "cloud_name", "we4x4",
        "api_key", "xxxxxxxxxxxxx",
        "api_secret", "xxxxxxxxxxxxxxxx"));
try {
    Map result = cloudinary.uploader().upload("" + RealFilePath, ObjectUtils.asMap(
            "tags", UserID));
    uploadedContentURL = (String) result.get("url");
} catch (IOException e) {
    e.printStackTrace();
}
The unsignedUpload() method expects the following parameters: the file, an uploadPreset, and an options Map, unlike the upload() API, which doesn't require the uploadPreset parameter.
However, both return the response from the server formed as a JSONObject.
There you can find all the information required for generating the URL (e.g. public_id, format, version, etc.).
A code example is available here: https://github.com/cloudinary/cloudinary_java/blob/master/cloudinary-android-test/src/main/java/com/cloudinary/test/UploaderTest.java#L67
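For example, here is a minimal sketch of the questioner's doInBackground, assuming unsignedUpload returns the same kind of response Map as the upload() call in the second snippet, and reusing the same cloud name, preset and variables from the question:

@Override
protected Void doInBackground(String... params) {
    Map config = new HashMap();
    config.put("cloud_name", "we4x4");
    Cloudinary cloudinary = new Cloudinary(config);
    try {
        // unsignedUpload returns the server response; keep a reference to it
        Map result = cloudinary.uploader().unsignedUpload("" + RealFilePath, "frtkzlwz",
                ObjectUtils.asMap("tags", UserID, "resource_type", "auto"));
        // the delivery address of the uploaded image ("secure_url" for https)
        uploadedContentURL = (String) result.get("url");
    } catch (IOException e) {
        e.printStackTrace();
        progressDialog.setMessage("Error uploading file");
        progressDialog.hide();
    }
    return null;
}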
Problem: a zip file with CSV files generated from data seems to be corrupted after upload to Azure Blob Storage.
The zip file opens fine before upload and everything works; that same zip file is corrupted after upload.
During upload I use the Azure Storage Blob client library for Java (v12.7.0, but I also tried previous versions). This is the code I use (similar to the example provided in the SDK readme file):
public void uploadFileFromPath(String pathToFile, String blobName) {
    BlobClient blobClient = blobContainerClient.getBlobClient(blobName);
    blobClient.uploadFromFile(pathToFile);
}
When I download the uploaded file directly from Storage Explorer, it is already corrupted.
What am I doing wrong?
According to your description, I suggest you use the following method to upload your zip file:
public void uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, BlobHttpHeaders headers, Map<String,String> metadata, AccessTier tier, BlobRequestConditions requestConditions, Duration timeout)
We can use this method to set the content type. For example:
BlobHttpHeaders headers = new BlobHttpHeaders()
        .setContentType("application/x-zip-compressed");
Integer blockSize = 4 * 1024 * 1024; // 4 MB
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, null, null);
blobClient.uploadFromFile(pathToFile, parallelTransferOptions, headers, null, AccessTier.HOT, null, null);
For more details, please refer to the document
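Put together, a sketch of how the uploadFileFromPath method from the question could look with the content type set (assuming the same blobContainerClient field as in the question):

public void uploadFileFromPath(String pathToFile, String blobName) {
    BlobClient blobClient = blobContainerClient.getBlobClient(blobName);

    // mark the blob as a zip archive so it is stored with the right content type
    BlobHttpHeaders headers = new BlobHttpHeaders()
            .setContentType("application/x-zip-compressed");

    // upload in 4 MB blocks
    Integer blockSize = 4 * 1024 * 1024;
    ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions(blockSize, null, null);

    blobClient.uploadFromFile(pathToFile, parallelTransferOptions, headers, null, AccessTier.HOT, null, null);
}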
Eventually it turned out to be my fault: I didn't close the ZipOutputStream before uploading the file. That is not much of a problem when you use try-with-resources and just want to generate a local file, but in my case I upload the file to Blob Storage while still inside the try section. The file was incomplete (not closed), so it appeared in storage with corrupted data. This is what I should have done from the very beginning:
private void addZipEntryAndDeleteTempCsvFile(String pathToFile, ZipOutputStream zipOut,
                                             File file) throws IOException {
    LOGGER.info("Adding zip entry: {}", pathToFile);
    zipOut.putNextEntry(new ZipEntry(pathToFile));
    try (FileInputStream fis = new FileInputStream(file)) {
        byte[] bytes = new byte[1024];
        int length;
        while ((length = fis.read(bytes)) >= 0) {
            zipOut.write(bytes, 0, length);
        }
        zipOut.closeEntry();
        file.delete();
    }
    zipOut.close(); // the missing part
}
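Alternatively (a sketch of the same idea, not the exact code from my project): the ZipOutputStream itself can be managed with try-with-resources and the upload moved after the try block, so the archive is guaranteed to be finished and closed before uploadFileFromPath runs, and the explicit zipOut.close() inside the helper is no longer needed. generateCsvFiles() below is a hypothetical placeholder for whatever produces the CSV files.

private void createZipAndUpload(String zipPath, String blobName) throws IOException {
    // try-with-resources finishes and closes the zip before the upload below
    try (ZipOutputStream zipOut = new ZipOutputStream(new FileOutputStream(zipPath))) {
        for (File csvFile : generateCsvFiles()) { // hypothetical helper returning the CSV files
            addZipEntryAndDeleteTempCsvFile(csvFile.getName(), zipOut, csvFile);
        }
    }
    // the zip file on disk is now complete, so the uploaded blob is not corrupted
    uploadFileFromPath(zipPath, blobName);
}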
Once again, thank you for your help @JimXu. I really appreciate it.
I am trying to store the URL of an image from Firebase Storage in the Realtime Database and then load the URL with the Picasso library. The URL is stored, but in an incorrect format: all other images are loading, but the one child named postimage is not, because its URL is not correct.
I only need to know whether this code is correct with AndroidX, because I am using AndroidX; the rest of the things work properly, so something is wrong with this code.
Code that loads the image URL from Storage into the Realtime Database:
@Override
public void onComplete(@NonNull Task<UploadTask.TaskSnapshot> task) {
    if (task.isSuccessful()) {
        filePath.getDownloadUrl().addOnSuccessListener(new OnSuccessListener<Uri>() {
            @Override
            public void onSuccess(Uri uri) {
                final String downloadUrl = uri.toString();
                postImageUri = downloadUrl;
            }
        });
        PostImage = postImageUri;
        // downloadUrl = task.getResult().getUploadSessionUri().toString();
        // final String downloadUrl = filePath.getDownloadUrl().toString();
        // postImageUri = downloadUrl;
        // downloadUrl = task.getResult().getDownloadUrl().toString();
        Toast.makeText(PostActivity.this, "Image Uploaded Successfully To Storage!", Toast.LENGTH_SHORT).show();
        SavingPostInformationToDatabase();
    } else {
        String message = task.getException().getMessage();
        Toast.makeText(PostActivity.this, "Error! " + message, Toast.LENGTH_SHORT).show();
    }
}
});
I was facing this issue but later solved it. As you can see, I am using the filePath.getDownloadUrl() method, and inside its success listener I get the image Uri to store in the Realtime Database through the variable postImageUri. Inside the listener the variable holds the image link properly, but as soon as we use that variable outside the listener it gives a null value: getDownloadUrl() is asynchronous, so the code after addOnSuccessListener runs before onSuccess has set the variable.
So I simply cut my SavingPostInformationToDatabase() call and pasted it inside the getDownloadUrl() success listener, because the variable that stores the image link is used in SavingPostInformationToDatabase().
SavingPostInformationToDatabase() is a method that stores the values in the Realtime Database using a HashMap.
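In code, the rearranged listener looks roughly like this (same names as in the question; the only point is that everything that needs the download URL runs inside onSuccess):

@Override
public void onComplete(@NonNull Task<UploadTask.TaskSnapshot> task) {
    if (task.isSuccessful()) {
        filePath.getDownloadUrl().addOnSuccessListener(new OnSuccessListener<Uri>() {
            @Override
            public void onSuccess(Uri uri) {
                // the URL is only available here, once the asynchronous call has finished
                postImageUri = uri.toString();
                PostImage = postImageUri;
                Toast.makeText(PostActivity.this, "Image Uploaded Successfully To Storage!", Toast.LENGTH_SHORT).show();
                SavingPostInformationToDatabase();
            }
        });
    } else {
        String message = task.getException().getMessage();
        Toast.makeText(PostActivity.this, "Error! " + message, Toast.LENGTH_SHORT).show();
    }
}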
I've been running into issues when downloading Excel .xlsx files using the Google Drive API v3. The code I'm using is as follows (I'm using the .NET SDK):
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;
using Google.Apis.Util.Store;
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace DriveQuickstart
{
    class Program
    {
        // If modifying these scopes, delete your previously saved credentials
        // at ~/.credentials/drive-dotnet-quickstart.json
        static string[] Scopes = { DriveService.Scope.Drive };
        static string ApplicationName = "Drive API .NET Quickstart";
        const string FileId = "my_file_id"; // put the ID of the Excel file you want to download here

        public static void Main(string[] args)
        {
            Run().GetAwaiter().GetResult();
            Console.Read();
        }

        private static async Task Run()
        {
            UserCredential credential;
            using (var stream =
                new FileStream("credentials.json", FileMode.Open, FileAccess.Read))
            {
                // The file token.json stores the user's access and refresh tokens, and is created
                // automatically when the authorization flow completes for the first time.
                string credPath = "token.json";
                credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                    GoogleClientSecrets.Load(stream).Secrets,
                    Scopes,
                    "user",
                    CancellationToken.None,
                    new FileDataStore(credPath, true)).Result;
                Console.WriteLine("Credential file saved to: " + credPath);
            }

            // Create Drive API service.
            var service = new DriveService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = ApplicationName,
            });

            // Define parameters of request.
            FilesResource.GetRequest getRequest = service.Files.Get(FileId);
            using (var stream = new System.IO.FileStream("anExcelFile.xlsx", System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.ReadWrite))
            {
                var downloadProgress = await getRequest.DownloadAsync(stream, CancellationToken.None);
                if (downloadProgress.Exception != null)
                {
                    Console.WriteLine(string.Format("We got error {0} {1} {2}", downloadProgress.Exception.Message, Environment.NewLine, downloadProgress.Exception.StackTrace));
                }
                else
                {
                    Console.WriteLine("Download ok");
                }
            }
        }
    }
}
You can run this sample easily by following the steps described here. This works fine; however, as soon as someone opens the file with Google Sheets and modifies it, I start seeing the following error:
D2020-03-16 02:10:13.647293 Response[00000007] Response status: InternalServerError 'Internal Server Error'
D2020-03-16 02:10:13.653278 Response[00000007] An abnormal response wasn't handled. Status code is InternalServerError
D2020-03-16 02:10:13.660288 Response[00000007] Abnormal response is being returned. Status Code is InternalServerError
E2020-03-16 02:10:13.667240 Exception occurred while downloading media The service drive has thrown an exception: Google.GoogleApiException: Internal Server Error
at Google.Apis.Download.MediaDownloader.<DownloadCoreAsync>d__31.MoveNext()
Looking at the file info after it was opened with Google Sheets, I can see that its size has changed to 0, so I tried to export it as you would a Google spreadsheet, like so:
FilesResource.ExportRequest exportRequest = client.Files.Export(fileId, mimeType);
using (var stream = new System.IO.FileStream(fileName, System.IO.FileMode.OpenOrCreate, System.IO.FileAccess.ReadWrite))
{
    await exportRequest.DownloadAsync(stream, cancellationToken);
}
with mimeType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet".
However, I then run into the following error:
D2020-03-16 01:53:13.512928 Response[00000003] Response status: Forbidden 'Forbidden'
D2020-03-16 01:53:13.520906 Response[00000003] An abnormal response wasn't handled. Status code is Forbidden
D2020-03-16 01:53:13.525911 Response[00000003] Abnormal response is being returned. Status Code is Forbidden
E2020-03-16 01:53:13.538857 Exception occurred while downloading media The service drive has thrown an exception: Google.GoogleApiException: Google.Apis.Requests.RequestError
Export only supports Google Docs. [403]
Errors [
Message[Export only supports Google Docs.] Location[ - ] Reason[fileNotExportable] Domain[global]
]
at Google.Apis.Download.MediaDownloader.<DownloadCoreAsync>d__31.MoveNext()
So it seems that neither downloading nor exporting works in this particular case. Is there anything else I should be trying? Using the webContentLink (https://drive.google.com/uc?id=fileId&export=download) works fine (in a browser, that is), so I guess it should be possible to download the file.
I raised the issue with Google and it seems it was fixed (cf. this issue). I tried again today, following the steps described in the original question, and I can now see that after the Excel file has been edited with Google Sheets, its size is greater than 0 and it can be downloaded.
Files that couldn't be downloaded because of this issue still appear to have the same problem, but manually deleting and re-uploading those files should make them downloadable.
We're currently going through an upgrade and ran into an issue where we can no longer redirect a user to one of our files. We've tried all of the different redirect methods, but for example purposes this is the easiest to show. If you use the action below and feed it a file, it does nothing. However, if you change the "false" to "true" (for forcedownload), it will indeed download the file. This reinforces what we have seen with the other redirect methods: the system will not let you redirect to a URL that has .ashx in it.
Is this an intentional change or is this a bug? We tried both 19.205.0023 and 19.207.0026.
Thanks for your help =)
public PXAction<UsrDesign> viewFile;
[PXUIField(DisplayName = "View File")]
[PXButton()]
protected virtual void ViewFile()
{
    string fileName = Design.Current.ProofFile; // whatever file you want; in our case it comes from a custom screen
    UploadFile uploadFile = PXSelect<UploadFile, Where<UploadFile.name, Equal<Required<UploadFile.name>>>>.Select(this, fileName);
    UploadFileMaintenance fileGraph = PXGraph.CreateInstance<UploadFileMaintenance>();
    var file = fileGraph.GetFile((Guid)uploadFile.FileID);
    throw new PXRedirectToFileException(file, false);
}
I am using the MS Graph .NET SDK, attempting to copy a file from one SharePoint document library to another SharePoint document library.
If the file is approximately 38 MB, a GatewayTimeout exception is thrown for an unknown error.
Either MS has a bug, or I am doing something incorrectly. Here is my code:
try
{
    HttpRequestMessage hrm = new HttpRequestMessage(HttpMethod.Post, request.RequestUrl);
    hrm.Content = new StringContent(JsonConvert.SerializeObject(request.RequestBody), System.Text.Encoding.UTF8, "application/json");
    await client.AuthenticationProvider.AuthenticateRequestAsync(hrm);
    HttpResponseMessage response = await client.HttpProvider.SendAsync(hrm);
    if (response.IsSuccessStatusCode)
    {
        var content = await response.Content.ReadAsStringAsync();
    }
}
catch (Microsoft.Graph.ServiceException ex)
{
    throw new Exception("Unknown Error");
}
Anyone see a problem here?
EDIT: Here is my revised code
public static async Task copyFile(Microsoft.Graph.GraphServiceClient client, string SourceDriveId, string SourceItemId, string DestinationDriveId, string DestinationFolderId, string FileName)
{
    try
    {
        var destRef = new Microsoft.Graph.ItemReference()
        {
            DriveId = DestinationDriveId,
            Id = DestinationFolderId
        };
        await client.Drives[SourceDriveId].Items[SourceItemId].Copy(null, destRef).Request().PostAsync();
        //await client.Drives[SourceDriveId].Root.ItemWithPath(itemFileName).Copy(parentReference: dest).Request().PostAsync();
    }
    catch (Microsoft.Graph.ServiceException ex)
    {
        throw new Exception(ex.Message);
    }
}
The above revised code continues to give the same error; however, tonight it is also occurring on a 13.8 MB file that had previously worked fine.
Logically, because the error doesn't occur for smaller files, I think it has something to do with file size.
The response is supposed to be a 202 with a Location header (see Copy Item in the Graph docs); however, I have never been able to obtain a Location header. I suspect that Microsoft Graph is not getting the Location header information from the OneDrive API and is therefore throwing a Gateway Timeout error.
I believe this is what you're looking for:
await graphClient.Drives["sourceDriveId"]
    .Items["sourceItemId"]
    .Copy(null, new ItemReference()
    {
        DriveId = "destinationDriveId",
        Id = "destinationFolderId"
    })
    .Request()
    .PostAsync();
This will take a given DriveItem and copy it to a folder in another Drive.