webClient.UploadFile("http://www.myurl.com/~/media/DCF92BB74CDA4D558EEF2D3C30216E30.ashx", @"E:\filesImage\Item.png");
I'm trying to upload images to Sitecore using the WebClient.UploadFile() method, passing my Sitecore address and the path of my local images, but the upload fails. I have to do this without using any APIs or a Sitecore instance.
The upload process would be the same as with any ASP.NET application. However, once the file has been uploaded you need to create a media item programmatically. You can do this from an actual file in the file system, or from a memory stream.
The process involves creating a MediaCreator object and calling its CreateFromFile method.
This blog post outlines the whole process:
Adding a file to the Sitecore Media Library programatically
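As a minimal sketch of that approach (assuming the uploaded file is already on the server's disk and the code runs inside the Sitecore context; the destination path is just an example):

var options = new Sitecore.Resources.Media.MediaCreatorOptions
{
    Database = Sitecore.Configuration.Factory.GetDatabase("master"),
    // Hypothetical destination item path in the media library
    Destination = "/sitecore/media library/Images/MyImage"
};

// Creates the media item (blob plus metadata) from a file on disk
Sitecore.Data.Items.MediaItem mediaItem =
    new Sitecore.Resources.Media.MediaCreator()
        .CreateFromFile(@"E:\filesImage\Item.png", options);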
If you're simply thinking about optimizing your developer workflow, you could use the Sitecore PowerShell Extensions with the Remoting API, as described in this blog post.
If you want to go the web service route, there are a number of options:
a) Sitecore Rocks web service (if you are allowed to install it, or it is already available).
b) Sitecore Razl service (third party; requires a license).
c) Sitecore PowerShell Remoting (requires the Sitecore PowerShell Extensions to be installed on the Sitecore server).
d) You can also use the Sitecore web service found under sitecore\shell\WebService\Service.asmx (but this is the legacy predecessor of the newer SitecoreItemWebAPI).
e) Last is my enhanced SitecoreItemWebAPI (this needs SitecoreItemWebApi 1.2 as a prerequisite).
But in the end, except for option d, you need to install something extra in order to upload the image over HTTP, and you also need valid credentials to use any of the methods above.
If your customers upload the image on the website, you need to create the item in your master database (which requires access and write rights on the master database). Depending on your security requirements, you might consider not building this with custom code, but instead using the Sitecore Web Forms for Marketers module with its out-of-the-box file upload: create a form with an upload field and use the WFFM web services.
If you don't want to use the Sitecore API, then you can do the following:
Write code that uploads the images into this folder: [root]/upload/
You might need to create a folder structure that represents how the images are stored in Sitecore, e.g. images uploaded into [root]/upload/Import/ will be stored in /sitecore/media library/Import.
Sitecore will automatically add these images to the Media Library.
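A minimal sketch of that approach in C# (the web-root path below is a hypothetical example, and the upload watcher must be enabled on the Sitecore instance for the images to be picked up):

// Copy a local image into the Sitecore upload folder so the upload
// watcher picks it up. Paths are assumptions for illustration only.
string sourceFile = @"E:\filesImage\Item.png";
string uploadFolder = @"C:\inetpub\wwwroot\MySite\Website\upload\Import"; // hypothetical web root

System.IO.Directory.CreateDirectory(uploadFolder);
System.IO.File.Copy(
    sourceFile,
    System.IO.Path.Combine(uploadFolder, System.IO.Path.GetFileName(sourceFile)),
    overwrite: true);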
Hope this helps
Option: You can use the Item Web API for this. No reference to any Sitecore DLL is needed; you only need access to the host and to be able to enable the Item Web API.
References:
Upload the files using it: http://www.sitecoreinsight.com/how-create-media-items-using-sitecore-item-web-api/
Enable Item Web Api: http://sdn.sitecore.net/upload/sdn5/modules/sitecore%20item%20web%20api/sitecore_item_web_api_developer_guide_sc66-71-a4.pdf#search=%22item%22
I guess that is pretty much what you need but, as Jay S mentioned, adding more details to your question helps in finding the best option for your particular case.
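As a rough illustration of what such an upload could look like once the Item Web API is enabled (the host, parent path, query string and credentials below are placeholders; check the developer guide and the linked post for the exact contract of your version):

// Sketch only: create a media item under the media library via the Item Web API.
// Host, destination path, item name and credentials are placeholders.
using (var client = new System.Net.WebClient())
{
    client.Headers.Add("X-Scitemwebapi-Username", "sitecore\\admin");
    client.Headers.Add("X-Scitemwebapi-Password", "b");

    // UploadFile POSTs the file as multipart/form-data to the parent media folder
    byte[] response = client.UploadFile(
        "http://www.myurl.com/-/item/v1/sitecore/media library/Import?name=Item",
        @"E:\filesImage\Item.png");

    Console.WriteLine(System.Text.Encoding.UTF8.GetString(response));
}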
private void CreateImageIteminSitecore()
{
    // Requires: using Sitecore.Data; using Sitecore.Data.Items;
    //           using Sitecore.SecurityModel; using System.IO;
    string filePath = @"C:\Sitecore\Website\ImageTemp\Pic.jpg";
    using (new SecurityDisabler())
    {
        Database masterDb = Sitecore.Configuration.Factory.GetDatabase("master");
        Sitecore.Resources.Media.MediaCreatorOptions options = new Sitecore.Resources.Media.MediaCreatorOptions();
        options.FileBased = true;
        options.AlternateText = Path.GetFileNameWithoutExtension(filePath);
        options.Destination = "/sitecore/media library/Downloads/";
        options.Database = masterDb;
        options.Versioned = false; // do not create a versioned media item
        options.KeepExisting = false;
        Sitecore.Data.Items.MediaItem mediaitemImage = new Sitecore.Resources.Media.MediaCreator().CreateFromFile(filePath, options);
        Item ImageItem = masterDb.GetItem(mediaitemImage.ID.ToString());
        ImageItem.Editing.BeginEdit();
        ImageItem.Name = Path.GetFileNameWithoutExtension(filePath);
        ImageItem.Editing.EndEdit();
    }
}
In my application we receive media such as photos and videos from a third party, and I want to store them in Hybris as media.
How can we upload photos/videos to Hybris media through Java code?
You can check my simple example. Don't forget to add an exception block. Check CatalogUnawareMedia and CatalogMedia before your implementation; if you are not planning to synchronize third-party objects, use CatalogUnawareMedia.
// folder is optional
final MediaFolderModel folder = mediaService.getFolder("myfolder");
final CatalogUnawareMediaModel media = getModelService().create(CatalogUnawareMediaModel.class); // or CatalogMediaModel
media.setCode(myFileName);
media.setFolder(folder);
getModelService().save(media);
// attach the actual binary content, then save again
mediaService.setStreamForMedia(media, myStream);
getModelService().save(media);
We are designing an Azure website which will allow users to upload content (MP4, Docx... MS Office files) which can then be accessed.
Some video content we will encode into several different quality formats before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into Azure (or a third-party offering) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback, potentially for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender but am not sure these offer an API.
I have successfully done this using the open-source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .NET.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot):
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt");  //any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occurred! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules and allow ClamAV to use the many good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com; it's a SaaS for malware/virus detection and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly, you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases, but as others have pointed out it's easy to use and is expandable.
You can also install a commercial scanner, such as AVG, Kaspersky, etc. Many of these come with a C API that you can talk to directly, although getting access to it can often be expensive from a licensing point of view.
Alternatively, you can call the scanner's executable directly and capture its output, using something like the following:
// Requires: using System.Diagnostics;
var proc = new Process
{
    StartInfo = new ProcessStartInfo
    {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();

while (!proc.StandardOutput.EndOfStream)
{
    string line = proc.StandardOutput.ReadLine();
    // inspect each line of the scanner's output here
}
You would then need to parse the output to get the result and use it within your application.
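As one hedged example of what that parsing can look like: many command-line scanners also report the result through their exit code (ClamAV's clamscan, for instance, documents 0 for clean, 1 for virus found, 2 for error), so a simple check after the process finishes might be:

// Sketch: interpret the scanner result once the process has exited.
// The exit-code convention below matches ClamAV's clamscan; check the
// documentation of whichever scanner you actually invoke.
proc.WaitForExit();

bool infected = proc.ExitCode == 1;
bool scanError = proc.ExitCode >= 2;

if (scanError)
    throw new InvalidOperationException("Scanner failed with exit code " + proc.ExitCode);

if (infected)
{
    // reject the upload / quarantine the file
}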
Finally, there are now some commercial APIs available to do this kind of thing, such as attachmentscanner (disclaimer: I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus-checking engine.
New for Spring/Summer 2020: Advanced Threat Protection for Azure Storage includes Malware Reputation Screening, which detects malware uploads using hash reputation analysis powered by Microsoft Threat Intelligence, covering hashes for viruses, trojans, spyware and ransomware. Note: hash reputation analysis cannot guarantee that every piece of malware will be detected.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131
Is there a way to expose a Java REST web service in Liferay, but not in a portlet, that can receive a JSON request and store the data in a Journal Article?
Then, when a user logs into Liferay, they will see the web content.
Yes, there is: JSONWebServiceActionsManagerUtil.registerJSONWebServiceAction
For instance :
Class<?> serviceImplClass;   // the class implementing your service
Method serviceMethod;        // the method you want to expose
Object serviceImpl;          // an instance of the implementation

String path = jsonWebServiceMappingResolver.resolvePath(serviceImplClass, serviceMethod);
String method = jsonWebServiceMappingResolver.resolveHttpMethod(serviceMethod);

JSONWebServiceActionsManagerUtil.registerJSONWebServiceAction("/yourwspath", serviceImpl, serviceImplClass, serviceMethod, path, method);
You should then be able to see the new web service in http://SERVER/api/jsonws
Well yes, Liferay has a full API (even JSON-based, SOAP optional, though no classic REST) that you can use. A simple Stack Overflow answer is not the right place to give a full introduction on how to work with Liferay's API, but you might want to look up Service Builder (which is used to create Liferay's API) and then look at JournalArticleService and related services: the Web Content Management API is called "Journal" in Liferay (for historical reasons).
I'm working on a personal project to manage users of my club, it's hosted on the free Azure package (for now at least), partly as an experiment to try out Azure. Part of creating their records is to add a photo, so I've got a Contact Card view that lets me see who they are, when they came and a photo.
I have installed ImageResizer and it's really easy to resize the 10 MP photos from my camera and save them to the file system locally, but it seems that for Azure I need to use their blobs to upload pictures to Windows Azure Web Sites, and that's new to me. The documentation for ImageResizer says that I need to use AzureReader2 in order to work with Azure blobs, but it isn't free. It also says, in their best practices #5, to
Use dynamic resizing instead of pre-resizing your images.
which is not what I was thinking; I was going to resize to 300x300 and 75x75 (for the thumbnail) when creating the user's record. But if I should be storing full-size images as blobs and dynamically resizing on the way out, can I just use standard means to upload a blob into a container to save it to Azure, and then, when I want to display the images, use ImageResizer and pass it each image to resize as required? That way I wouldn't need AzureReader2 - or have I misunderstood what it does / how it works?
Is there another way to consider?
I've not yet implemented cropping, but that's next to tackle once I've worked out how to actually store the images properly.
With some trepidation, I'm going to disagree with astaykov here. I believe you CAN use ImageResizer with Azure WITHOUT needing AzureReader2. Maybe I should qualify that by saying 'It works on my setup' :)
I'm using ImageResizer in an MVC 3 application. I have a standard Azure account with an images container.
Here's my test code for the view:
@using (Html.BeginForm("UploadPhoto", "BasicProfile", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <input type="file" name="file" />
    <input type="submit" value="OK" />
}
And here's the corresponding code in the Post Action method:
// This action handles the form POST and the upload
[HttpPost]
public ActionResult UploadPhoto(HttpPostedFileBase file)
{
// Verify that the user selected a file
if (file != null && file.ContentLength > 0)
{
string newGuid = Guid.NewGuid().ToString();
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
// Create the blob client.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve reference to a previously created container.
CloudBlobContainer container = blobClient.GetContainerReference("images");
// Retrieve reference to the blob we want to create
CloudBlockBlob blockBlob = container.GetBlockBlobReference(newGuid + ".jpg");
// Populate our blob with contents from the uploaded file.
using (var ms = new MemoryStream())
{
ImageResizer.ImageJob i = new ImageResizer.ImageJob(file.InputStream,
ms, new ImageResizer.ResizeSettings("width=800;height=600;format=jpg;mode=max"));
i.Build();
blockBlob.Properties.ContentType = "image/jpeg";
ms.Seek(0, SeekOrigin.Begin);
blockBlob.UploadFromStream(ms);
}
}
// redirect back to the index action to show the form once again
return RedirectToAction("UploadPhoto");
}
This is 'rough and ready' code to test the theory and could certainly stand improvement but, it does work both locally and when deployed on Azure. I can also view the images I've uploaded, which are correctly re-sized.
Hope this helps someone.
The answer to the concrete question:
If using ImageResizer with Azure blobs do I need the AzureReader2
plugin?
is YES. As described in the ImageResizer documentation, that plugin is used to read, process and serve images out of Blob Storage. So there is no doubt: if you are going to use ImageResizer with blobs, AzureReader2 is the plugin you need; it takes care of reading and serving from blob storage.
Although I question the ImageResizer team's familiarity with Windows Azure, since they reference "Azure SDK v.2" while the most current version of the Azure SDK is 1.8. What they mean is the Azure Storage Client Library, which has versions 1.7 and 2.x; version 2.x is the recommended one and ships with Azure SDK 1.8. So do not search for an Azure SDK 2.0 - install the latest SDK, which is 1.8, and use the NuGet Package Manager to install the Azure Storage Library v2.0.x.
You can also upload resized versions to Azure. So, you first upload the original image as a blob, say with the name /original/xxx.jpg; then you create a resized copy of the image and upload that to Azure with a name like /thumbnail/xxx.jpg. If you want to create the resized versions on the fly or on a separate thread, you may need to temporarily save the original to disk.
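A minimal sketch of that pre-resizing approach, reusing the blobClient, container and ImageResizer calls from the answer above (the blob name prefixes and thumbnail size are illustrative assumptions):

// Sketch: store the untouched original plus a pre-resized thumbnail.
string name = Guid.NewGuid().ToString() + ".jpg";
CloudBlobContainer container = blobClient.GetContainerReference("images"); // blobClient as above

// 1) original, exactly as uploaded
CloudBlockBlob original = container.GetBlockBlobReference("original/" + name);
file.InputStream.Seek(0, SeekOrigin.Begin);
original.UploadFromStream(file.InputStream);

// 2) thumbnail, resized with ImageResizer before upload
CloudBlockBlob thumb = container.GetBlockBlobReference("thumbnail/" + name);
using (var ms = new MemoryStream())
{
    file.InputStream.Seek(0, SeekOrigin.Begin);
    new ImageResizer.ImageJob(file.InputStream, ms,
        new ImageResizer.ResizeSettings("width=75;height=75;mode=crop;format=jpg")).Build();

    ms.Seek(0, SeekOrigin.Begin);
    thumb.Properties.ContentType = "image/jpeg";
    thumb.UploadFromStream(ms);
}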
I have some Test, Security, Project Management and other Word documents in TFS2010 source control under a Documents folder. Does anybody know how to access them in order to download and copy them to a local path?
Those files are not physically under the $/... folder; instead they have a SharePoint web server path like "http://myServer/sites/MyProyect/Test/Tests_P13_F00120.doc". I have tried to use the DownloadFiles activity without success because it needs a path that starts with $/. Any suggestions, please?
DownloadFiles is not an activity you can make use of here; it's meant to deal with files residing in source control.
Instead, you need to establish a connection to the SharePoint Copy service of your TFS, which resides at http://<Site>/_vti_bin/Copy.asmx. We did this by adding a Service Reference in our build solution.
We then implemented a build activity that does basically the opposite of what you are after: during TFS build it uploads documents into Sharepoint.
The instantiation looks like this:
// Bind to the SharePoint Copy service using NTLM (Windows) credentials
BasicHttpBinding binding = new BasicHttpBinding();
binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.None;
binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;

EndpointAddress endpointAddress = new EndpointAddress("http://<Site>/_vti_bin/Copy.asmx");
CopySoapClient copyService = new CopySoapClient(binding, endpointAddress);
This Copy service exposes a GetItem method; this is the one you should probably be invoking.
I'm not aware whether GetItem is capable of supporting a http://myServer/sites/MyProject/Test/*.doc kind of wildcard, though.
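For a single document, a hedged sketch of the GetItem call (the exact proxy signature depends on the generated Service Reference, so verify the parameter shapes against your generated client; the local target path is just an example):

// Sketch: download one document from SharePoint via Copy.asmx GetItem
// and write it to a local path, reusing copyService from above.
string url = "http://myServer/sites/MyProyect/Test/Tests_P13_F00120.doc";

FieldInformation[] fields;
byte[] bytes;
uint result = copyService.GetItem(url, out fields, out bytes);

if (bytes != null)
{
    System.IO.File.WriteAllBytes(@"C:\LocalDocs\Tests_P13_F00120.doc", bytes);
}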