How to reset the cache in SharePoint Content Database?

I would like to clear the cache in the SharePoint Content database so it will read data from the local files on my server. I am performing some testing, and this is part of our testing requirements. Is there any way to do it?
From my searching, there are ways to reset the Configuration Database cache, but I have found nothing about clearing the Content cache so that it reads data from the server's local files.

There is a Microsoft article about how to flush the BLOB cache in SharePoint Server; it shows how to do the flush with a short Windows PowerShell script against the web application. A BLOB cache is a disk-based cache that stores binary large objects (BLOBs) such as frequently used image, audio, and video files, and other files that are used to display web pages. Hope this helps.

Related

Retrieve PDF links from Google Drive and upload them to MongoDB?

I want to create a website to provide PDF notes to students.
I am planning to upload the PDFs to my Google Drive and serve them from my website.
I am using MongoDB as the database, and my plan is to read each PDF's link from my Google Drive and save it in a MongoDB collection so that it renders on my website dynamically.
Is there any convenient way to do this automatically rather than copy-pasting each link? (That would be time-consuming, and we have to upload thousands of PDFs.)
If this method is not possible, please suggest a better way to do this using Node.js and MongoDB.
If using Google Drive is not a must, I recommend using MongoDB's GridFS. GridFS allows you to store documents larger than 16 MB (the BSON document size limit). This way, you do not need to manage or update document links, because MongoDB is already in charge of both the metadata and the document itself.
If your filesystem limits the number of files in a directory, you can use GridFS to store as many files as needed.
When you want to access information from portions of large files without loading whole files into memory, you can use GridFS to recall sections of files as needed.
When you want to keep your files and metadata automatically synced and deployed across a number of systems and facilities, you can use GridFS. When using geographically distributed replica sets, MongoDB can distribute files and their metadata automatically to a number of mongod instances and facilities.
This tutorial might be the starting point.
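As a rough illustration (not from the tutorial above), a minimal GridFS upload with the Node.js mongodb driver might look like the sketch below; the connection string, database name, and bucket name are placeholders.

    import { createReadStream } from "node:fs";
    import { MongoClient, GridFSBucket } from "mongodb";

    // Placeholder connection string and database name.
    const client = new MongoClient("mongodb://localhost:27017");

    async function uploadPdf(localPath: string, filename: string): Promise<void> {
      await client.connect();
      // Chunks go to pdfs.chunks; metadata (filename, length, upload
      // date) goes to pdfs.files, so there are no links to maintain.
      const bucket = new GridFSBucket(client.db("notes"), { bucketName: "pdfs" });
      await new Promise<void>((resolve, reject) => {
        createReadStream(localPath)
          .pipe(bucket.openUploadStream(filename))
          .on("finish", () => resolve())
          .on("error", reject);
      });
    }

    // Example: uploadPdf("./notes/algebra.pdf", "algebra.pdf").then(() => client.close());

Running something like this once over a local folder of PDFs would replace the copy-paste step entirely, since MongoDB creates each file's metadata at upload time.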

Storing MP4 Files in Cassandra?

I am currently considering whether I should store media in an Apache Cassandra database. The use case is that the site will take uploads from users for insurance claims and will need to store the files so that they cannot be accessed outside the correct permissions, while at the same time they need to be streamable.
If I store them on a file system, I have to deal with redundancy, backups, and so on using old file-system-based tech. I am not really interested in a CDN because many of them are expensive, but also because permission to view the content depends on information in the app, such as which adjuster is assigned to the case. In addition, I want to stream the files rather than require download-and-view, which would be the default mode for requests against a CDN.
If I put them in Cassandra, it will handle the replication and storage, and I can stream the binary data out of the database to the user with integrated permissions. What I am concerned about is whether I will run into problems with Cassandra rows holding huge HD video files that are sometimes 1 to 2 hours long (testimony).
I am interested in the recommendations of Cassandra users concerning this issue. How would you solve this problem? Are there any lessons you have learned that I can benefit from? Would you suggest anything specific about the video tables if I go with Cassandra storage? Is there any CDN that will stream, not require download, allow me to plug in permissions, and at the same time be open source?
Thanks a bunch.
Cassandra is definitely not designed to be an object store and should not be used as one. However, I've worked on plenty of use cases where Cassandra was the metadata store alongside the object store/CDN, and it can complement them quite nicely.
Check out KillrVideo for inspiration: https://killrvideo.github.io/
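To illustrate that metadata-plus-object-store split, here is a minimal sketch with the Node.js cassandra-driver; the keyspace, table, and column names are hypothetical, and the video bytes themselves would live in the object store, not in Cassandra.

    import { Client } from "cassandra-driver";

    // Placeholder contact point, data center, and keyspace.
    const client = new Client({
      contactPoints: ["127.0.0.1"],
      localDataCenter: "datacenter1",
      keyspace: "claims",
    });

    // Cassandra holds only the metadata and a pointer (URL/key) into
    // the object store where the MP4 actually lives.
    async function saveVideoMetadata(
      videoId: string, caseId: string, adjusterId: string, objectUrl: string
    ): Promise<void> {
      const query =
        "INSERT INTO videos (video_id, case_id, adjuster_id, object_url, uploaded_at) " +
        "VALUES (?, ?, ?, ?, toTimestamp(now()))";
      await client.execute(query, [videoId, caseId, adjusterId, objectUrl], { prepare: true });
    }

Keeping the adjuster assignment in the same row makes the permission check ("is this adjuster assigned to this case?") a single lookup before the app streams the file out of the object store.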
This seems like a good key-value use case for the Streaming LOB support in Oracle NoSQL Database. You might want to look at this: http://docs.oracle.com/cd/NOSQL/html/GettingStartedGuide/lobapi.html

Suggestions for storing and accessing resource files on an Azure website and web role

I've got a website that will need to access a file on the file system (or somewhere) containing some template text used to send an email. I'd like a suggestion for where to store the file, and how to access/find it at runtime, for both Azure web roles and Azure websites.
So far, I've read about Azure local storage, but that seems to be an option only for web roles, not for Azure websites (I think?). Plus, I'm not sure how the file would make its way into that storage.
The other option I was considering was adding the file to the VS solution and marking it as content, in which case I believe it would be deployed with the other files. But then I don't know how to get the path to access the file from the .NET code. Also, I believe I would need to redeploy the entire solution in order to update the file.
I would appreciate any thoughts on this. Thanks...
Using non-local storage is your best approach; it is highly unlikely that your speed requirements will be so intense that you need the performance of local files.
I would recommend blob storage in the same region as your website/cloud service.
If you have extreme load and need the file loaded rapidly, then keep an in-memory cache, set to expire after 5 minutes or something similarly low, to store the template. On each request, check the cache; if the template is not there, load it into the cache from storage and then serve it, as sketched below.
Caching is worth considering if you are getting a constant one request per second or higher. Anything lower than that, just stick to reading on demand directly from blob storage.
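Here is a rough sketch of that pattern, shown in TypeScript with the @azure/storage-blob package since the pattern itself is stack-agnostic; the container and blob names are placeholders.

    import { BlobServiceClient } from "@azure/storage-blob";

    // Placeholder container/blob names; the connection string comes
    // from app settings.
    const container = BlobServiceClient
      .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!)
      .getContainerClient("templates");

    let cached: { text: string; fetchedAt: number } | undefined;
    const TTL_MS = 5 * 60 * 1000; // the "5 minutes or something low" above

    async function getEmailTemplate(): Promise<string> {
      if (cached && Date.now() - cached.fetchedAt < TTL_MS) {
        return cached.text; // cache hit: no storage round trip
      }
      const buffer = await container.getBlobClient("email-template.html").downloadToBuffer();
      cached = { text: buffer.toString("utf8"), fetchedAt: Date.now() };
      return cached.text;
    }

Updating the template is then just uploading a new blob; no redeploy of the solution is needed, which addresses the concern in the question.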
If you really want to read something locally off the disk, then use:
Server.MapPath("~/YourFolder/YourFile.ext")

Images in Web application

I am working on an application in which users will upload a huge number of images, and I have to show those images on a web page.
What is the best way to store and retrieve the images?
1) Database
2) FileSystem
3) CDN
4) JCR
or something else?
What I know is:
Database: saving and retrieving images from the database will lead to a lot of database queries and will convert the BLOB to a file every time. I think it will degrade the website's performance.
File system: if I keep image information in the database and the image files in the file system, there will be sync issues. For example, if I take a backup of the database, I also have to take a backup of the images folder. And if there are millions of files, it will consume a lot of server resources.
I read about it here:
http://akashkava.com/blog/127/huge-file-storage-in-database-instead-of-file-system/
The other options are CDNs and JCR.
Please suggest the best option
Regards
Using the File System is only really an option if you only plan to deploy to one server (i.e. not several behind a load balancer), OR if all of your servers will have access to a shared File System. It may also be inefficient, unless you cache frequently-accessed files in the application server.
You're right that storing the actual binary data in a Database is perhaps overkill, and not what databases do best.
I'd suggest a combination:
A CDN (such as AWS CloudFront), backed by publicly accessible (but crucially, publicly read-only) storage such as Amazon S3, would mean that your images are served efficiently, wherever the browsing user is located, and cached appropriately in their browser (thus minimising bandwidth). S3 (or similar) gives you an API to upload and manage the images from your application servers, without worrying about how all servers (and the outside world) will access them.
I'd suggest holding metadata about each image in a database, however. This means you could assign each image a unique key (generated by your database), add extra info (file format, size, tags, author, etc.), and also store the S3 (or similar) path via the CDN as the publicly accessible path to the image.
This combination of Database and shared publicly-accessible storage is probably a good mix, giving you the best of both worlds. The Database also means that if you need to move / change or bulk delete images in future (perhaps deleting all images uploaded by an author who is deleting their account), you can perform an efficient Database query to gather metadata, followed by updating / changing the stored images at the S3 locations the Database says they exist.
You say you want to display the images on a web page. This combination means the application server can query the database efficiently for the image selection you want to show (including restricting by author, pagination, etc.), then generate a view containing images that refer to the correct CDN path. Viewing the images is also quite efficient, as you combine dynamic content (the page on which the images are shown) with static content (the images themselves, via the CDN).
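To make that concrete, here is a hedged sketch of the upload side using the AWS SDK v3; the bucket name, region, CloudFront domain, and metadata fields are all illustrative assumptions.

    import { readFile } from "node:fs/promises";
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    // Placeholder region, bucket, and CDN domain.
    const s3 = new S3Client({ region: "us-east-1" });
    const BUCKET = "my-image-bucket";
    const CDN_BASE = "https://dxxxxxxxx.cloudfront.net";

    // Upload the binary to S3 and build the metadata record that the
    // application would persist in its database.
    async function storeImage(localPath: string, key: string, author: string) {
      await s3.send(new PutObjectCommand({
        Bucket: BUCKET,
        Key: key,
        Body: await readFile(localPath),
        ContentType: "image/jpeg", // assumption: detect per file in practice
      }));
      return {
        key,                       // unique identifier held in the database
        author,                    // enables queries like "all images by X"
        url: `${CDN_BASE}/${key}`, // public, CDN-fronted path stored in the DB
        uploadedAt: new Date(),
      };
    }

The returned record is what the database query layer works with for pagination, author filters, and bulk deletes; the binary itself is only ever touched through S3.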
CDNs may be a good option for you.
You can store the links to the images along with the image information in your database.

How to save images with Node.js

The further I go with my blog, the more problems I hit :) Can anyone tell me the best way to save images? I store the data in MongoDB; should I save the images in there as well, or should I use the local file system? Thank you.
Use the file system to store images and free up DB resources to serve data. For a bigger site, the images should really go on a CDN.
Store them in MongoDB using GridFS. That way you're not limited by file size, the images are easily shared between multiple app servers, and the images are naturally backed up with the rest of your MongoDB data.
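For what it's worth, here is a minimal sketch of the serving side with GridFS and Express; the connection string, database name, bucket name, and route are assumptions.

    import express from "express";
    import { MongoClient, GridFSBucket } from "mongodb";

    const app = express();
    const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI

    app.get("/images/:name", (req, res) => {
      const bucket = new GridFSBucket(client.db("blog"), { bucketName: "images" });
      // Stream the image straight from GridFS to the HTTP response,
      // without buffering the whole file in memory.
      bucket
        .openDownloadStreamByName(req.params.name)
        .on("error", () => res.sendStatus(404)) // unknown filename
        .pipe(res);
    });

    client.connect().then(() => app.listen(3000));

Because every app server talks to the same MongoDB deployment, this works unchanged behind a load balancer, which is the sharing advantage mentioned above.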
