getting info from an MS SQL .bak file - node.js

I am writing an Electron app that, among many other things, restores an unknown .bak file to an MS SQL server and then extracts more information. To do this successfully, I need to extract some info from that .bak file programmatically (so SSMS cannot be used). I will be using sqlcmd, since that can be run from Electron's node.js backend. Unfortunately, I have a bit of a chicken-and-egg problem: it seems I cannot restore a .bak file without knowing the paths of the .mdf files specified within it, and those paths cannot be found without first restoring it. There is a RESTORE WITH MOVE option, but this also seems to require knowledge of the file names inside the .bak, which cannot be determined from the .bak itself. How might I get this information, or is it impossible?

Read about RESTORE FILELISTONLY.
The documentation for the RESTORE statement lists further informational arguments (HEADERONLY, LABELONLY, VERIFYONLY, and so on) that you can use together with RESTORE to fetch metadata from a backup without restoring it.
The result set of FILELISTONLY gives you the LogicalName, the PhysicalName, the file's type (data or log), information about the filegroup and much more.
The other statements provide other metadata. Just check it out...
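For instance, here is a minimal sketch of how the two steps could be chained from node.js via sqlcmd. The function name, the server name, the trusted-connection flag (-E), the target directory, and the .mdf/.ldf extension guesses are all assumptions, and the output parsing is deliberately simplified:

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Sketch only: "localhost" and -E (trusted connection) are assumptions,
// as are the extension guesses; adjust for your server and login.
async function restoreUnknownBak(bakPath: string, dbName: string, dataDir: string): Promise<void> {
  const disk = `N'${bakPath.replace(/'/g, "''")}'`; // escape for a T-SQL string literal

  // Step 1: ask the backup itself what files it contains.
  const { stdout } = await run("sqlcmd", [
    "-S", "localhost", "-E",
    "-h", "-1",   // suppress column headers
    "-W",         // trim trailing whitespace
    "-s", "|",    // pipe-separated columns, easy to split
    "-Q", `SET NOCOUNT ON; RESTORE FILELISTONLY FROM DISK = ${disk}`,
  ]);

  // Each row starts: LogicalName | PhysicalName | Type | ... where Type is
  // D (data) or L (log). The original PhysicalName is ignored: we supply
  // our own target paths, which is the whole point of WITH MOVE.
  const moves = stdout
    .split(/\r?\n/)
    .filter((line) => line.includes("|"))
    .map((line) => {
      const [logicalName, , type] = line.split("|").map((c) => c.trim());
      const ext = type === "L" ? ".ldf" : ".mdf";
      return `MOVE N'${logicalName}' TO N'${dataDir}\\${dbName}_${logicalName}${ext}'`;
    });

  // Step 2: restore using only the logical names we just discovered.
  const restore = `RESTORE DATABASE [${dbName}] FROM DISK = ${disk} WITH ${moves.join(", ")}, RECOVERY`;
  await run("sqlcmd", ["-S", "localhost", "-E", "-Q", restore]);
}

Because WITH MOVE takes the logical names from FILELISTONLY and lets you pick arbitrary new physical paths, nothing about the original server's directory layout needs to be known.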

Related

Copy Data activity in ADF creates strange temporary filenames

I am using a Copy Data activity to upload the contents of a database table as .csv files to an SFTP server using a self-hosted Integration Runtime.
The "owners" of the SFTP site have pointed out that they are seeing "strange" filenames, i.e. a GUID appended to the designated filename. When I look at the uploaded files, however, that suffix is gone.
It appears therefore that the Copy Data activity
(a) creates the file with a GUID in the name,
(b) streams the content into the file, and
(c) renames the file at the end.
Can somebody confirm or deny this? Has anybody else seen this behaviour?
(The problem with this is that the file's appearance in step (a) already triggers some processing in another system.)
Any help is greatly appreciated.
Thanks, Martin
The behavior you are describing indicates that you have "Upload with temp file" checked in your SFTP sink settings, which I believe is the default (if you author the pipeline in JSON, this is the useTempFileRename property of the sink).
You can uncheck this box if you don't want the temporary file or if the server doesn't support the rename.

Azure Data Factory - Recording file name when reading all files in folder from Azure Blob Storage

I have a set of CSV files stored in Azure Blob Storage. I am reading the files into a database table using the Copy Data task. The source is set to the folder where the files reside, so it grabs each file and loads it into the database. The issue is that I can't seem to map the file name in order to read it into a column. I'm sure there are more complicated ways to do it, for instance first reading the metadata and then reading the files in a loop, but surely the file metadata should be available to use while traversing the files?
Thanks
This is not possible in a regular Copy activity. Mapping Data Flows has this capability; it is still in preview, but maybe it can help you out. If you check the documentation, you will find a source option, "Column to store file name", that writes the source file name into a column of your choice.

APEX: Read uploaded excel file with as_read_xlsx

As far as I understand, APEX 5.1 does not support loading Excel files into tables.
I found the as_read_xlsx package, which seems to make it possible to SELECT from Excel files, but it does not show how to use it with, for example, files uploaded via a "File Browse" item.
Now, I am very new to this environment, so please explain it from the beginning.
What I did was upload the package script to SQL Workshop and execute it, without errors. But what now?
APEX 5.1 doesn't support it out of the box, but you can use the EXCEL2COLLECTION plugin.
It is very straightforward: create a File Browse page item with an upload button that calls an on-submit process (e.g. CreateCollection) of type Excel2Collection [Plug In]. Specify the file browse item, a collection name and the CSV separator. Then you can do as you please with the data; for example, run some validations on it and then insert it into a table, where you can access it as normal.

Retrieve contents of a ZIP file on SharePoint without downloading it

I have written a bit of automated code that checks a SharePoint site and looks for a ZIP file (let's call it doc.zip). If doc.zip is found, it downloads it and then checks it for a file (say target.docx). doc.zip is about 300 MB, so I want to download it only when necessary.
What I would like to know is: given that SharePoint has some ZIP search capability, is it possible to write CSOM (C#) code to find doc.zip and then retrieve the contents of doc.zip without downloading it?
Just to reiterate, I am comfortable with searching for files in a folder on SP, downloading files, and unpacking ZIP entries. What I need is to retrieve a ZIP file's contents on SP without downloading it.
E.g. is there an SP command like:
cxt.Load(SomeZipFileQuery);
cxt.ExecuteQuery();
Thanks in advance.
This capability is not available. I do like the idea, though. Having the ability to parse ZIP files on the server side and then download only the relevant bits would be ideal. Perhaps raise this on UserVoice to see if others would also find it useful: https://sharepoint.uservoice.com
Ok, I have proven yet again that stubbornness will prevail.
I have figured out that if I use the /_api/search/query?querytext='myfile.zip' REST API to search for my file, the search also matches ZIP files that contain the file I need. And it works perfectly.
Of course there is the added pain of parsing an XML response, but it works very nicely for my case.
At least if someone is looking for this solution, here it is. I won't bore anyone with the full code, as the search API has probably been done to death in other threads.
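For anyone who does want a starting point, here is a minimal sketch of that search call in TypeScript. The site URL, file name, and bearer-token authentication are placeholder assumptions (how you obtain the token is out of scope), and asking for JSON via the Accept header sidesteps the XML parsing mentioned above:

// Sketch only: siteUrl, fileName, and token are placeholders.
async function searchForZip(siteUrl: string, fileName: string, token: string): Promise<unknown> {
  const url = `${siteUrl}/_api/search/query?querytext='${encodeURIComponent(fileName)}'`;
  const response = await fetch(url, {
    headers: {
      // Request JSON instead of the default XML (Atom) payload.
      Accept: "application/json;odata=verbose",
      Authorization: `Bearer ${token}`,
    },
  });
  if (!response.ok) {
    throw new Error(`Search failed with HTTP ${response.status}`);
  }
  // Matches (including ZIPs whose contents match) are listed under the
  // RelevantResults table inside the response payload.
  return response.json();
}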

GridFS: Clean out all unreferenced files

I have just moved to storing things in GridFS in MongoDB. During testing, I noticed that many files are being created but not deleted properly. I have a collection users with a field avatar, which contains the ObjectId of the corresponding file.
Now I'd like to have some command I could use to remove all the files and chunks that are not referenced there. Is it possible to do that with one query? Maybe map-reduce?
Also, I am not sure how to properly delete GridFS files in node-mongodb-native.
"Now I'd like to have some command I could use to remove all the files and chunks that are not referenced there."
The key term here is "referenced". MongoDB does not have joins, and therefore it has no concept of "references".
"Maybe map-reduce?"
Map/reduce is a query tool, not a data-modification tool. The same is true of the newer Aggregation Framework.
What you will have to do is loop through your files and check the references for each one individually; you can then delete the files that are not referenced.
Take a look at the driver's documented examples of how to issue those deletions; a minimal sweep is sketched below.
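Here is a minimal sketch of such a sweep using the current driver's GridFSBucket API. The connection string and database name are placeholders, the default "fs" bucket is assumed, and the users collection and avatar field come from the question:

import { GridFSBucket, MongoClient, ObjectId } from "mongodb";

async function removeOrphanedFiles(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const db = client.db("mydb"); // placeholder database name
    const bucket = new GridFSBucket(db); // default "fs" bucket

    // Pass 1: collect every file id still referenced by a user's avatar.
    const referenced = new Set<string>();
    const users = db.collection("users").find(
      { avatar: { $ne: null } },
      { projection: { avatar: 1 } }
    );
    for await (const user of users) {
      referenced.add(String(user.avatar));
    }

    // Pass 2: walk fs.files and delete anything nobody points at.
    // bucket.delete() removes both the files document and all of its chunks.
    const files = db.collection("fs.files").find({}, { projection: { _id: 1 } });
    for await (const file of files) {
      if (!referenced.has(String(file._id))) {
        await bucket.delete(file._id as ObjectId);
      }
    }
  } finally {
    await client.close();
  }
}

Run this against a test database first: anything not referenced by users.avatar is deleted permanently.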
