Is there an API in Windows 8.1 to check the status of a OneDrive file? I would like to know whether the file has been replicated to the local disk or is just a placeholder for a file in the cloud that doesn't take up local disk space.
This information can be retrieved by calling the
StorageFile.Properties.RetrievePropertiesAsync() method (http://msdn.microsoft.com/en-us/library/windows/apps/hh770652.aspx)
and passing "System.OfflineAvailability" (http://msdn.microsoft.com/en-us/library/windows/desktop/bb787532(v=vs.85).aspx)
in the list of properties to be retrieved.
The method returns a dictionary in which the property will have one of 3 possible values:
0 - not available offline
1 - available offline
2 - not applicable (not a SkyDrive/OneDrive file)
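For reference, here is a minimal C# sketch of the call, assuming "file" is a StorageFile you already obtained (e.g. from a FileOpenPicker); the helper class and method names are made up:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.Storage;

public static class OneDriveStatus
{
    // Returns 0 (not available offline), 1 (available offline), 2 (not applicable),
    // or null if the property was not returned for this file.
    public static async Task<uint?> GetOfflineAvailabilityAsync(StorageFile file)
    {
        IDictionary<string, object> props =
            await file.Properties.RetrievePropertiesAsync(
                new[] { "System.OfflineAvailability" });

        object value;
        if (props.TryGetValue("System.OfflineAvailability", out value) && value != null)
        {
            return Convert.ToUInt32(value);
        }
        return null;
    }
}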
You can use the StorageFile.IsAvailable property. See the following quickstart article:
http://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn467360.aspx
EDIT: in fact, as @Ghostrider mentions in the comments, this answer is incorrect. @Ghostrider has found the correct solution.
I want to create a job where I need to consider the latest available file as the input file.
The file name format is as below: FILE1.TEST.TYYMMDD
Is there any way to identify the latest file based on the date present in the file name via JCL?
P.S. GDG versions are not created in the existing process. Only a PS file is created.
Thank you
I wanted to create a job where I need to consider the latest file available as input file. File [name] format is as below: FILE1.TEST.TYYMMDD is there any way to identify latest file based on date present in file name via JCL.
No.
You indicate that GDGs are not created in the existing process. GDGs would be the best way to accomplish your goal. Absent GDGs, you must write code.
You could accomplish your goal by writing (C, CLIST, COBOL, PL/I, Rexx) code using the LMDINIT and LMDLIST ISPF services. Then you would execute your code by running ISPF in batch. Many mainframe shops have a cataloged procedure to execute ISPF in batch.
I agree with @cschneid that there is no platform-provided way to handle this. However, I want to point out that GDGs are the platform's way of managing PS files for access in a relative form.
Your comment
GDG versions are not created in existing process. Only PS file is created.
That statement didn't make sense to me. A GDG is not a file type like physical sequential (PS) or partitioned (PO). It's a convention that allows relative references to files created over time, which sounds like what you want. I've only seen GDGs used for PS files.
Putting the date in the file name can have its uses, but to z/OS it's only part of the file name and not meta information that it operates on (like the G0000V00 generation numbers in GDGs).
So I am trying to download an image from Firebase Storage in a Cloud Function like this:
const bucket = storage.bucket("myApp.appspot.com")
const filePath = `eventPoster/${creatorID}/${eventID}.png`
await bucket.file(filePath).download({
destination: tmpFilePath
})
But the problem is that I can't ensure the image is always in PNG format; it can also be JPEG or JPG.
I will get an error like this if I hard-code it like that:
Error: No such object: myApp.appspot.com/eventPoster/user1/ev123.png
So how do I check whether a path is valid or not, or how do I write code that can work dynamically with other image extensions?
Posting as Community Wiki, because part of it is based on a comment, and so other members of the Community can feel free to edit it.
As mentioned in the comments, you can use the method exists() to check whether a specific file exists in your bucket. This way, you will not run into an issue in case the file doesn't exist when you are trying to return it. This would be the best way to check only whether the file exists, since you are basing your name structure on the ids.
Besides that, as clarified in this other post from the Community here, you can restrict the file types in your bucket in case you are using the POST method to upload files (otherwise, you won't be able to), which can also be an alternative. In case you are not using POST, as mentioned as well in this other post, you can try to construct a Cloud Function that checks the file type before it uploads the file to the bucket, which would make your application upload only specific files.
In case you would like to check on how to upload files to Cloud Storage using Cloud Functions, you can check this tutorial here: How to upload using firebase cloud functions
Unfortunately, these seem to be the only available options right now, but I think they are a good start for you.
Let me know if the information helped you!
You can use the exists() method on a file to check whether your PNG is present or not. You can also change your code and use the getFiles() method. As you can see, you can pass options, one of them named "prefix". In your case you can have this:
bucket.getFiles({
  // template literal (backticks) so the IDs are interpolated
  prefix: `eventPoster/${creatorID}/${eventID}`
}, (err, files) => {
  // "files" lists every object whose name starts with the prefix, whatever the extension
});
Here you will list all the files with that prefix: PNG, JPEG and others. Make your code smarter to leverage this dynamic answer.
I'm quite new to Data Factory and Logic Apps (but I have many years of experience with SSIS).
I succeeded in loading a folder with 100 text files into SQL Azure with Data Factory,
but the files themselves are untouched.
Now, another requirement is that I loop through the folders to get all files with a certain file extension,
In the end I should move (=copy & delete) all the files from the 'To_be_processed' folder to the 'Processed' folder
I cannot find where to put 'wildcards' and such:
For example, get all files with file extensions .001, .002, .003, .004, .005, ... until ..., .996, .997, .998, .999 (a thousand files)
--> also searching in the subfolders.
Is it possible to call a Data Factory from within a Logic App? (although this seems unnecessary)
Please find some more detailed information in this screenshot:
Thanks in advance for helping me out exploring this new technology!
Interesting situation.
I agree that using Logic Apps just for this additional layer of file handling seems unnecessary, but Azure Data Factory may currently be unable to deal with exactly what you need...
In terms of adding wildcards to your Azure Data Factory datasets, you have 3 attributes available within the JSON type properties block, as follows.
Folder Path - to specify the directory, which can work with a partition by clause for a time slice start and end. Required.
File Name - to specify the file, which again can work with a partition by clause for a time slice start and end. Not required.
File Filter - this is where wildcards can be used for single and multiple characters: (*) for multiple and (?) for single. Not required.
More info here: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-onprem-file-system-connector
I have to say that, separately, none of the above is ideal for what you require, and I've already fed back to Microsoft that we need a more flexible attribute that combines the 3 values above into 1, allowing wildcards in various places and a partition by condition that works with more than just date/time values.
That said, try something like the below.
"typeProperties": {
"folderPath": "TO_BE_PROCESSED",
"fileFilter": "17-SKO-??-MD1.*" //looks like 2 middle values in image above
}
On a side note: there is already a Microsoft feedback item that's been raised for a file move activity, which is currently under review.
See here: https://feedback.azure.com/forums/270578-data-factory/suggestions/13427742-move-activity
Hope this helps
We have used a C# application which we call through 'App Services' -> WebJobs.
It is much easier to iterate through folders that way. To load SQL we used SQL bulk insert.
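For illustration, a minimal sketch of that approach, assuming a console WebJob that loops over the files in a folder and bulk-loads each one into a staging table with SqlBulkCopy; the paths, connection string and table name are placeholders:

using System.Data;
using System.Data.SqlClient;
using System.IO;

class Program
{
    static void Main()
    {
        string sourceFolder = @"C:\data\To_be_processed";   // placeholder
        string processedFolder = @"C:\data\Processed";      // placeholder
        string connectionString = "<your SQL Azure connection string>";

        foreach (string path in Directory.EnumerateFiles(sourceFolder, "*.*", SearchOption.AllDirectories))
        {
            // Load the file into an in-memory table with a single text column.
            var table = new DataTable();
            table.Columns.Add("Line", typeof(string));
            foreach (string line in File.ReadLines(path))
            {
                table.Rows.Add(line);
            }

            // Bulk insert into a staging table (its schema must match the DataTable).
            using (var bulkCopy = new SqlBulkCopy(connectionString))
            {
                bulkCopy.DestinationTableName = "dbo.StagingTable"; // placeholder
                bulkCopy.WriteToServer(table);
            }

            // "Move" = copy & delete, as described in the question.
            File.Move(path, Path.Combine(processedFolder, Path.GetFileName(path)));
        }
    }
}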
I'm trying to set X-Delete-After and X-Delete-At on a file I'm uploading.
So I tried:
FileMetaData.Add("X-Delete-After", "30");
cloudFilesProvider.UpdateObjectMetadata(inStrContainerID, strDesFileName, FileMetaData);
but the header did not get recognized.
Is that the right approach?
Edit: I'm trying to use ICloudFilesMetadataProcessor.ProcessMetadata, but really have no clue how, and am not able to find any documentation.
In the current release of the SDK, you can include the X-Delete-After or X-Delete-At value in the headers argument to the following calls:
IObjectStorageProvider.CreateObject
IObjectStorageProvider.CreateObjectFromFile
Currently there is no way in the SDK to change the value of this header after the file has already been uploaded (e.g. using UpdateObjectMetadata as you suggest in the question would set the values X-Object-Meta-X-Delete-After or X-Object-Meta-X-Delete-At, which is not correct).
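For example, a minimal sketch of setting the header at upload time, assuming the CreateObjectFromFile overload that accepts an optional headers dictionary (parameter order and names may differ slightly between SDK versions; the variable names are the ones from the question):

using System.Collections.Generic;

// "cloudFilesProvider" is the already-authenticated provider from the question.
var headers = new Dictionary<string, string>
{
    { "X-Delete-After", "30" } // delete the object 30 seconds after upload
};

cloudFilesProvider.CreateObjectFromFile(
    inStrContainerID,   // container name
    strDesFileName,     // local file path (also used as the object name here)
    strDesFileName,
    headers: headers);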
Here is a related issue on GitHub:
#167: How to assign version folder
Gopstar --
EDITED:
After more investigation: I set X-Delete-After to 1500 and the code worked. Sort of. When viewing the file header information via the dashboard, X-Delete-At was set.
However, the result was correct; the X-Delete-At was equal to what would be 1500 seconds from the time I set it.
Original reply:
I played around; if you set the value higher (for example, I tried X-Delete-After = 3000) it will work.
I do NOT know the lowest acceptable number, but I'm sure someone will chime in with it.
Hope this gives SOME help.
I'll explain the task requested of me:
I have two containers in Azure, one called "data" and one called "script". In the "data" container there's a txt file with data, and in the "script" container there's a script file.
Now, I need to programmatically (with a WorkerRole) execute the script file with the content of the data file as parameters (example: a script file that accepts a string 's' and prints "Hello, 's'" to the screen, where 's' is the string given and the data file contains that string), and save the result of the run into another file, which needs to be saved in another container called "result".
How do I do all this? I've already uploaded the files and created the blobs programmatically, but I can't seem to understand how to execute the file or how to save its result to another file.
Can I please have some help?
Thanks in advance
Here are the steps in pseudo code:
1. Retrieve the script from the blob (using DownloadToStream())
2. Compile the script (I will leave this to you as I have no idea what format your script is)
3. Load the parameters from the blob (same as step 1)
4. Execute the script with those parameters.
If your scripts can be written as lambda expressions then this becomes a lot easier, as you can turn them into Actions.
Edit based on your questions:
DownloadText() is no longer included in Azure Storage 2.0; you only have access to DownloadToStream(). Even if you are using an older version (say 1.7), I would recommend using DownloadToStream() in the event you ever upgrade in the future. This will prevent having to refactor your code.
In terms of executing your script, it depends on what type of script it is. If it is C# code you can use this example: Is it possible to dynamically compile and execute C# code fragments?. If you need to execute a different type of script you would need to run it using Process.Start, and you can look at this example: http://www.dotnetperls.com/process-start
I do not have much experience with point number 2 but those are the processes I have heard and seen used.
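To make the flow concrete, here is a rough sketch under these assumptions: the script is an executable (or something you can hand to an interpreter via Process.Start), the container names are the ones from the question ("script", "data", "result"), and the blob names, connection string and paths are placeholders:

using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class ScriptRunner
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your storage connection string>");
        var client = account.CreateCloudBlobClient();

        // 1. Retrieve the script from the "script" container.
        var scriptBlob = client.GetContainerReference("script").GetBlockBlobReference("myscript.exe"); // placeholder name
        string scriptPath = Path.Combine(Path.GetTempPath(), "myscript.exe");
        using (var fs = File.Create(scriptPath))
        {
            scriptBlob.DownloadToStream(fs);
        }

        // 3. Load the parameters from the "data" container.
        var dataBlob = client.GetContainerReference("data").GetBlockBlobReference("data.txt"); // placeholder name
        string parameters;
        using (var ms = new MemoryStream())
        {
            dataBlob.DownloadToStream(ms);
            ms.Position = 0;
            parameters = new StreamReader(ms).ReadToEnd().Trim();
        }

        // 4. Execute the script with the data as its argument and capture its output.
        var psi = new ProcessStartInfo(scriptPath, parameters)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        string output;
        using (var process = Process.Start(psi))
        {
            output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
        }

        // Save the result into the "result" container.
        var resultBlob = client.GetContainerReference("result").GetBlockBlobReference("result.txt"); // placeholder name
        using (var ms = new MemoryStream(System.Text.Encoding.UTF8.GetBytes(output)))
        {
            resultBlob.UploadFromStream(ms);
        }
    }
}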