A friend has asked me for help: he wants all uploaded documents to be converted to PDF, with the resulting URL saved into a small database for later use.
Can anyone explain what I should look at, or which method I need to override, to create such a workflow?
This needs to work for all documents (even images).
Thank you.
Have a look at this post; it does exactly what you need, although you may need to pull some tricks to write the URL into an external database. Writing it to a SharePoint list is easy.
Ping me if you need help.
It would be better to use a third-party API like the one offered by Muhimbi. To create your own, you would have to write an event handler in which you convert the document or image to PDF. For the PDF conversion you can use iTextSharp:
http://sourceforge.net/projects/itextsharp/
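To give a rough idea of the event-handler approach (a sketch only: the receiver class and the SaveUrlToDatabase helper are placeholders of mine, while SPItemEventReceiver and the iTextSharp calls are the standard APIs):

    using System.IO;
    using Microsoft.SharePoint;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    public class PdfConversionReceiver : SPItemEventReceiver
    {
        // Fires after a document has been uploaded to the library.
        public override void ItemAdded(SPItemEventProperties properties)
        {
            SPFile file = properties.ListItem.File;
            if (!file.Name.ToLower().EndsWith(".jpg")) return; // image branch only, for brevity

            using (MemoryStream pdfStream = new MemoryStream())
            {
                // Wrap the uploaded image in a one-page PDF with iTextSharp.
                Document doc = new Document();
                PdfWriter.GetInstance(doc, pdfStream);
                doc.Open();
                doc.Add(iTextSharp.text.Image.GetInstance(file.OpenBinary()));
                doc.Close();

                // Store the PDF next to the original and record its URL.
                SPFile pdf = properties.ListItem.ParentList.RootFolder.Files.Add(
                    Path.ChangeExtension(file.Name, ".pdf"), pdfStream.ToArray(), true);
                SaveUrlToDatabase(properties.WebUrl + "/" + pdf.Url); // your own DB call goes here
            }
        }

        private void SaveUrlToDatabase(string url) { /* hypothetical: INSERT into your database */ }
    }

Note that iTextSharp only covers images and PDF assembly; converting Word or Excel content to PDF is exactly the part where a third-party product like Muhimbi's earns its keep.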
I have an online store on a marketplace that provides a back-office, which features a section listing all my customers. The problem is that I can't export their contacts; I have to copy/paste their info one by one, which is very time consuming. So I was wondering if there is a way to automate this task within my browser. Since the contacts are an HTML list, I'd like to target the specific tags and export them into a compiled XML file.
Is this possible at all?
[EDIT]
Thank you all for your input, but it seems out of my reach in terms of knowledge. So I've decided to hire a freelancer to perform this task for me.
Selenium might be an option, but you didn't provide many details about the HTML. Does the page have a REST API? If so, maybe you can call it, get the JSON and parse it.
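If you do end up driving the browser, the Selenium route would look roughly like this in C# (a sketch; the URL and the CSS selector are invented, since we haven't seen your markup):

    using System.Linq;
    using System.Xml.Linq;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class ExportCustomers
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                // Hypothetical back-office page; log in first if the site requires it.
                driver.Navigate().GoToUrl("https://example.com/backoffice/customers");

                // "ul.customers li" is a placeholder selector for the contact list items.
                var contacts = driver.FindElements(By.CssSelector("ul.customers li"))
                                     .Select(li => new XElement("customer", li.Text));

                new XDocument(new XElement("customers", contacts)).Save("customers.xml");
            }
        }
    }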
I was wondering if it is possible to open an Excel file (or any Office file) stored on an Azure Blob account within a browser or, better yet, embedded in a web page. Kind of like a preview function instead of always prompting the user to download the file. I know this could easily be done by storing the file in SharePoint or OneDrive and using its embed functionality, but I'm trying to steer clear of those since we already use blob storage.
I've been searching, but most results only lead me to SharePoint/OneDrive.
Any help would be appreciated. :)
Edit (2014-07-14)
As per RGregg's suggestion below, I tried looking into creating a custom WOPI host, and I do think it would perfectly fit what I need. But I think I'm missing something: I cannot get the preview running and always get a "Server not found" error. I tried replacing the old discovery file directed at owa1.wingtip.com with officeapps.live.com/hosting/discovery, and it now goes as far as the Word Online loading image, but it gets stuck there. I couldn't really find other materials that expound on how to make it work, and it doesn't show any error whatsoever.
I also tried to create my own (in an attempt to simplify everything down to the bare basics) by implementing the required GetFile and CheckFileInfo methods. It successfully retrieves the file and the info, but I still can't integrate it with the Web Apps. I think I'm missing a big chunk of something, but I can't really figure it out. :(
I think it'd be easier to convert your backend over to Office 365 or OneDrive than to make your blob storage solution work with the Office apps, but I think what you would need to do is implement a WOPI host, like in this article: http://code.msdn.microsoft.com/office/Building-an-Office-Web-f98650d6. That would at least get you to a point where Excel Web App could load files from your blob storage.
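For reference, the minimum a WOPI host has to serve is the CheckFileInfo and GetFile endpoints. A rough ASP.NET Web API sketch (the FetchBlob helper is hypothetical, and a real host must also validate the access_token, which is omitted here):

    using System.Net;
    using System.Net.Http;
    using System.Web.Http;

    public class WopiController : ApiController
    {
        // CheckFileInfo: GET /wopi/files/{id}
        [HttpGet, Route("wopi/files/{id}")]
        public IHttpActionResult CheckFileInfo(string id)
        {
            byte[] blob = FetchBlob(id); // hypothetical helper that reads Azure Blob storage
            return Ok(new { BaseFileName = id, Size = blob.Length, OwnerId = "me", UserId = "me" });
        }

        // GetFile: GET /wopi/files/{id}/contents
        [HttpGet, Route("wopi/files/{id}/contents")]
        public HttpResponseMessage GetFile(string id)
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK);
            response.Content = new ByteArrayContent(FetchBlob(id));
            return response;
        }

        private byte[] FetchBlob(string id) { /* read the blob's bytes here */ return new byte[0]; }
    }

You then point Word/Excel Online at the host by passing your /wopi/files/{id} URL as the WOPISrc parameter of the action URLs found in the discovery XML.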
I've just recently found out about Google Doc Preview. Basically, you just need an online URL to your document and append it to:
https://docs.google.com/viewer?url=
and put that in an iframe. In full:
<iframe src="http://docs.google.com/viewer?url=http://<blobServer>/<filename>&embedded=true" width="600" height="780" style="border: none;"></iframe>
It already provides some sort of "Print Preview" in an iframe, so you have to keep pagination in mind when creating the document for a prettier view. It also doesn't require you to have any Google account to access it.
I still have some issues with it though:
Security. No required account = less security.
Doesn't render charts well. I had a pie chart and it appeared as one whole solid circle.
Doesn't render filters at all, thus...
Doesn't provide interactivity, unlike OneDrive's embed.
But, this still answers the question so I'm posting it here for anyone looking for a solution. :)
Any answers are still welcome. :)
I'm trying to build an e-commerce site for a client, and it isn't quite as straightforward as I had hoped. I am using Magento Community, and my client has a wholesaler who provides a data feed (over 5000 products). In order to make this data feed compliant with Magento (and its attributes), I edited some column headings in Excel and successfully uploaded the result as a CSV file.
My issue is that the wholesaler regularly and automatically renews the data feed. When this happens, I assume my tweaks to the spreadsheet will be wiped out, making my now Magento-compatible CSV file useless again.
My question, then, is how can I make the wholesaler data feed compliant with my revised version so I don't have to continually rename elements? Is this possible?
I apologise if this sounds very stupid, but I am more used to static website builds.
Thank you in advance.
Does your web server have access to the automatically updated data feed?
If so, just fetch the feed, modify it to be compliant with the Magento data feed format, then find Magento's 'process feed' (or whatever it is called) function and run it against the file.
If your web server cannot automatically fetch the file, can the PC producing the feed automatically post the data to the web server?
If neither is possible, then there will always be a person who has to re-upload the feed manually, in which case just make a simple 'Upload feed' page for them (or modify the Magento page), let them upload their standard feed, reformat it, then process the reformatted feed.
You would need to write a script that transforms the wholesaler data feed into the Magento format, so you do not need to rename everything manually each time.
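A minimal sketch of such a script (the column mappings are invented; swap in the real wholesaler-to-Magento attribute names, and note that a naive Split(',') breaks on quoted commas, so use a real CSV parser for production data):

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    class FeedTransformer
    {
        static void Main()
        {
            // Hypothetical wholesaler heading -> Magento attribute mappings.
            var rename = new Dictionary<string, string>
            {
                { "Product Name", "name" },
                { "RRP", "price" },
                { "Stock Level", "qty" }
            };

            string[] lines = File.ReadAllLines("wholesaler_feed.csv");

            // Rewrite only the header row; the data rows pass through untouched.
            lines[0] = string.Join(",", lines[0].Split(',')
                .Select(h => rename.ContainsKey(h.Trim()) ? rename[h.Trim()] : h));

            File.WriteAllLines("magento_feed.csv", lines);
        }
    }

Run it on a schedule (Task Scheduler or cron) right after the feed download, then point the Magento import at the output, and the manual Excel step disappears.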
Or at least, could anybody point me to docs about its crazy proprietary URL parameters and HTML field name obfuscation? I can only suppose this is caused by SharePoint...
The main problem is that, given a start page built with SharePoint, I can't recreate a form post with a programmatic client because:
field names vary; they are suffixed with some sort of id, hash, whatever (per session, I think? Not sure)
tracing HTTP traffic on my side, I see that the HTTP request is packed with strange parameters like __REQUESTDIGEST, __VIEWSTATE and many others
Is this an intentional protection device put up by SharePoint? What is the underlying architecture, and which objects are involved (script callbacks, ...)?
(BTW, I'm not doing anything evil, just trying to extract public government data from a website).
Thanks.
SharePoint is nothing more than an ASP.NET application; it is built entirely on top of ASP.NET 2.0.
That said, __VIEWSTATE is nothing but a hidden field that holds the view state information.
As for __REQUESTDIGEST, this is indeed an intentional protection: it carries a security validation token known as the form digest.
And finally, to answer your question: you will not be able to guess the field names and such unless you can change the source code of the application. The reason the field names look obfuscated is that those controls are not handwritten but are generated by the ASP.NET engine and parser; each field gets its name from its naming container.
One suggestion I would make: rather than trying to scrape the screen data, you can try alternate approaches. Each list in SharePoint has a built-in XML feed; try to consume it. If you have access to the site, you can also retrieve the information using 'Export to Excel', etc.
In addition to RSS, SharePoint also has a Web Services interface that you can use to get at, and interact with, data stored in SharePoint in a programmatic way.
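To illustrate the feed route: every SharePoint list exposes an RSS feed at _layouts/listfeed.aspx, which can be consumed with nothing more than the standard syndication classes. A minimal sketch, assuming a placeholder server URL and list GUID:

    using System;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class ListFeedReader
    {
        static void Main()
        {
            // Placeholder site URL and list GUID; find the real feed URL via the
            // list's RSS action. If the site isn't anonymous, download the XML
            // with a WebClient carrying credentials instead.
            string url = "http://server/sites/yoursite/_layouts/listfeed.aspx?List={LIST-GUID}";

            using (XmlReader reader = XmlReader.Create(url))
            {
                SyndicationFeed feed = SyndicationFeed.Load(reader);
                foreach (SyndicationItem item in feed.Items)
                    Console.WriteLine("{0} -> {1}", item.Title.Text, item.Links[0].Uri);
            }
        }
    }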
Background
My task is to show, in SharePoint, an image of a process map which should be clickable. Think of an imagemap in HTML. Some areas take you to other process map images, and others bring up a pop-up window.
"Connected" to each process map is a set of documents. These documents are stored in a document library; there is one process map for each folder in the document library. The documents should be shown next to the image. The person clicking either the image or a folder to navigate the hierarchy should also be able to upload, download and delete the documents.
Question
What would be the easiest solution for this?
My thoughts
... so far, my idea is to create a custom web part which I add above the document library browser (the default one in MOSS 2007). This web part reads an XML file pointing out the image to show and the areas that are to be clickable. It listens for some kind of event from the document library, like clicks on folders in the browser, or it reads the current URL to know where we currently are in the folder hierarchy, and from that shows the correct process map image. When the image is clicked, the web part updates the image and tells the document library to update accordingly.
Is this feasible? Am I on the wrong track? How do I communicate with a document library?
Thanks, Martin
My thought is that you create a web part that displays your image map and outputs (as a provider) the appropriate criteria to another web part that consumes it and displays the files in a document library.
You can achieve this by creating your own custom web part that displays a document library based on a CAML query. Each image sends a different CAML query to the document library web part.
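A sketch of what the consuming web part's query could look like (the "ProcessMap" field and the library name are invented for illustration):

    using Microsoft.SharePoint;

    public class ProcessMapFilter
    {
        // Returns the documents tagged with the given process map name.
        public static SPListItemCollection GetDocuments(SPWeb web, string mapName)
        {
            SPList library = web.Lists["Process Documents"]; // hypothetical library name
            SPQuery query = new SPQuery();
            // Escape mapName before splicing it in if it can contain XML characters.
            query.Query = "<Where><Eq><FieldRef Name='ProcessMap' />" +
                          "<Value Type='Text'>" + mapName + "</Value></Eq></Where>";
            return library.GetItems(query);
        }
    }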
I hope this helps. Please provide information on how you solved this problem if you have already done so.
Thanks
Long since I've been here... Actually solved this one.
We created two web parts, one for process navigation and one for filtering documents in the document library.
The web part for process navigation is actually just a web part that looks for a specific query parameter in the URL, appends ".html" to it, and then looks for that document in a document library. If found, the document is shown inside an iframe. Simple!
The HTML documents are produced in Visio, exported to HTML, and then uploaded to SharePoint. The links in the Visio document drive the application with queries.
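Roughly, the navigation part boils down to something like this (a sketch reconstructed from the description above, not our production code; the "map" query parameter and the /ProcessMaps/ path are invented):

    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    public class ProcessMapViewer : WebPart
    {
        protected override void Render(HtmlTextWriter writer)
        {
            // e.g. ?map=Sales -> /ProcessMaps/Sales.html
            string map = HttpUtility.HtmlAttributeEncode(
                Page.Request.QueryString["map"] ?? "Start");
            writer.Write("<iframe src=\"/ProcessMaps/" + map +
                         ".html\" width=\"100%\" height=\"600\"></iframe>");
        }
    }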
The web part that shows the corresponding documents also looks for a specific query parameter in the URL, then sends filter parameters to the document library through the IFilterProvider interface. I snatched an example IFilterProvider from MSDN, made it look in the URL for parameters, and then made the controls invisible to the user.
Really simple solution, though the customer needs to put in a lot of work to incorporate their company processes into it. And it is somewhat error prone, and probably a pain to make data-level changes to.