ESRI GPDataFile as input Parameter to GP Toolbox

I'm working with a PHP script that POSTs to a GPService toolbox (written in Python). The first parameter is supposed to be a GPDataFile. From the documentation, it looks like I can set the value of this parameter to a JSON-formatted string literal, {"url": "http://localhost/export/1234567890.kml"}, and arcpy.GetParameter(0) should handle this object correctly.
Unfortunately, I am receiving an error saying 'Please check your parameters'. There are two other parameters on the toolbox, but they are just strings and are working correctly. I am working in ArcGIS 10.0.
The overall goal of this interaction is to send a KML file from our SWF/ActionScript to the PHP script, which saves the KML to our database and then sends it to the GPService to translate it into a GDB and then into individual shapefile objects that are stored in the database for rendering back to the SWF/ActionScript.
Any help or thoughts on how to get the toolbox to accept the JSON structure would be greatly appreciated. I would like to avoid having to send the KML contents as a string object to the toolbox.

The answer can be what maniksundaram wrote on the ESRI forum (https://community.esri.com/thread/107738):
ArcGIS Server will not support direct GPDataFile upload. You have to upload the file using the upload task and give the item ID to the GP service.
Here is the high-level idea to get it working for any GP service that needs a file upload:
-Publish the geoprocessing service with the upload option enabled.
Refer to: ArcGIS Help (10.2, 10.2.1, and 10.2.2)
Operations allowed: Uploads: This capability controls whether a client can upload a file to your GIS server that the tasks within the geoprocessing service would eventually use. The upload operation is mainly used by web clients that need a way to send a file to the server for processing. The upload operation returns a unique ID for the file after the upload completes, which the web application could pass to the geoprocessing service. You may need to modify the maximum file size and timeouts depending on how large an upload you want your server to accept. Check the local REST SDK documentation installed on your ArcGIS Server machine for information on using an uploaded file with a geoprocessing service. This option is off by default. Allowing uploads to your service could possibly pose a security risk. Only turn this on if you need it.
-Upload the file using the upload URL that is generated for the geoprocessing service. The response will give you the itemID of the uploaded file.
http://<servername>:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer/uploads/upload
Response JSON:
{
  "success": true,
  "item": {
    "itemID": "ie84b9b8a-5007-4337-8b6f-2477c79cde58",
    "itemName": "SStation.csv",
    "description": null,
    "date": 1409942441508,
    "committed": true
  }
}
-Invoke the geoprocessing service with the itemID as the GPDataFile input (see the sketch after these steps).
For example, the KMLInput value would be {"itemID":"ie84b9b8a-5007-4337-8b6f-2477c79cde58"}
-The result will be added to a map service with the job ID if you have configured viewing GP results in a map service, or you can read the response as it returns.
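Roughly, the two calls could look like this. This is only a sketch in TypeScript on Node 18+ (built-in fetch/FormData assumed); the host, the service name ConvertKMLToLayer, and the parameter name KMLInput come from the examples above, and it assumes an asynchronous service (submitJob), so adjust everything to your deployment:
// Sketch: upload the KML, then submit the job with the returned itemID.
// Node 18+ assumed (built-in fetch/FormData/Blob); adjust host and names.
import { readFile } from "node:fs/promises";

const base = "http://servername:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer";

async function uploadAndSubmit(kmlPath: string) {
  // 1. POST the file to the uploads endpoint (Uploads capability must be on).
  const form = new FormData();
  form.append("file", new Blob([await readFile(kmlPath)]), "input.kml");
  form.append("f", "json");
  const uploadRes = await fetch(`${base}/uploads/upload`, { method: "POST", body: form });
  const { item } = await uploadRes.json(); // { itemID: "...", ... }

  // 2. Reference the upload as the GPDataFile parameter value.
  const query = new URLSearchParams({
    KMLInput: JSON.stringify({ itemID: item.itemID }),
    f: "json",
  });
  const jobRes = await fetch(`${base}/submitJob?${query}`);
  return jobRes.json(); // jobId/jobStatus for polling the result
}

uploadAndSubmit("./1234567890.kml").then(console.log);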

Related

Usage of the DoNotDistributeEvent tag in API responses for OneDrive and SharePoint

I am working with the OneDrive and SharePoint API, and I am observing that for file-accessed events, OneDrive and SharePoint return responses with the tag DoNotDistributeEvent. I want to know the significance of that tag.
I am trying to filter out the events generated internally by Microsoft for file access, based on the DoNotDistributeEvent flag in the API response.

How to download a file from a website by using a Logic App?

I'm trying to download an Excel file from a website (specifically DataCamp) in order to use its data in an automated process, but it is necessary to sign in on the page before getting the file. I was thinking this would be possible with the JSON Query on the HTTP action, but to be honest I don't know where to start (I'm new to Azure).
The process that I need to emulate to get the file extraction would be as follows (I know this could be possible with an API or RPA, but I don't have either available for now):
Could you give me some advice (how to get the desired result, or at least where to research)? Is this even possible?
If you don't have other options (e.g. your source is on an SFTP server), then using an HTTP action should work. Pass the BODY to your next action (e.g. you might want to persist it to a blob if the content is binary).
If your content is "readable" (e.g. JSON, CSV) and you want to load it for processing, you need to ensure, for large files, that you read it in chunks so it loads completely before processing.
A detailed explanation is at https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-handle-large-messages#download-content-in-chunks
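Logic Apps handles chunking for you through the HTTP action's settings described at that link; purely to illustrate what range-based chunked reading means, a sketch outside Logic Apps might look like this (TypeScript; the URL and chunk size are placeholders, and the server must support Range requests):
// Illustration only: range-based chunked download. The server must honor
// HTTP Range headers; URL and chunk size below are placeholders.
async function downloadInChunks(url: string, chunkSize = 4 * 1024 * 1024) {
  const head = await fetch(url, { method: "HEAD" });
  const total = Number(head.headers.get("content-length"));
  const parts: Uint8Array[] = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, total - 1);
    const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
    parts.push(new Uint8Array(await res.arrayBuffer()));
  }
  return Buffer.concat(parts); // the complete file, assembled in order
}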

NodeJS, how to handle image uploading with MongoDB?

I would like to know the best way to handle image uploading and saving the reference to the database. What I'm mostly interested in is the order in which you do the process.
Should you upload the images first from the front-end (say, to Cloudinary), then call the API with the resulting links to the images and save them to the database?
Or should you send the images to the server first, upload them to storage from the back-end, and save the reference afterwards?
Or should you do the image uploading after you save the record in the database, and then update it once the images are uploaded?
It really depends on your resources, timeline, and the number of images you need to upload daily.
If you have very few images to upload, you can upload the image to your server and then upload it to whatever cloud storage (S3, Cloudinary, ...) you are using. This is very easy to implement (you can find code snippets all over the internet), and you can securely keep the secret keys/credentials for your cloud platform on the server side.
But in my opinion, the best way of doing this is something like the following; I'll take user registration as an example.
Make a server call to get a temporary credential for uploading files to the cloud (generally, all providers offer this functionality, e.g. STS/signed URLs in AWS); a sketch of this step follows the list.
The user fills out the form and selects the image on the client side. When the user clicks the submit button, make one call to save the user in the database and start the upload with the credentials. If possible, keep a predictable upload path, like /users/:userId; this depends heavily on your use case.
When the upload finishes, make a server call for acknowledgment and store a flag in the database.
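As a sketch of step 1, here is what the temporary-credential call might look like with S3 presigned URLs (TypeScript; the bucket name, key layout, and expiry are assumptions):
// Step 1 sketched with S3 presigned URLs; bucket and key layout are assumptions.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Server side: hand the client a short-lived URL under a predictable path.
async function getUploadUrl(userId: string, fileName: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "my-app-uploads", // assumed bucket name
    Key: `users/${userId}/${fileName}`,
  });
  return getSignedUrl(s3, command, { expiresIn: 300 }); // valid for 5 minutes
}

// Client side: PUT the file straight to storage, then acknowledge (step 3).
async function uploadImage(url: string, file: Blob): Promise<void> {
  await fetch(url, { method: "PUT", body: file });
}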
The advantages of this approach are:
You completely offload your server from handling file operations, which are pretty heavy and I/O blocking, and you distribute that load across all clients.
If you want to post-process the files after upload, you can easily integrate this with serverless platforms and offload that as well.
You can easily provide a retry mechanism to your users in case a file upload fails; they won't need to refill the data, just upload the image/file again.
You don't need to expose the storage URL directly to the client for file upload, as you are using temporary credentials.
If the images are highly significant in your app, then ideally you should not complete the transaction until the image is saved. The approach should be to create an object in your code which you will eventually insert into MongoDB, start the upload of the image to the cloud, and then add the link to this object. Finally, insert this object into MongoDB in one go; do not make repeated calls. If anything fails before that, raise an error and catch the exception.
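A condensed sketch of that single-insert approach (TypeScript; uploadToCloud stands in for whatever storage client you use):
import { MongoClient } from "mongodb";

// uploadToCloud is a placeholder for your storage client (S3, Cloudinary, ...).
declare function uploadToCloud(image: Buffer): Promise<string>;

async function createRecord(data: { name: string }, image: Buffer) {
  const imageUrl = await uploadToCloud(image); // throws if the upload fails
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    // One insert that already contains the image link; no follow-up update.
    await client.db("app").collection("records").insertOne({ ...data, imageUrl });
  } finally {
    await client.close();
  }
}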
There can be many answers.
If you are working with big files, greater than 16 MB, go with GridFS and multer
(converting the images to a different format and saving them to MongoDB).
If your files are actually less than 16 MB, try using this converter, which changes a JPEG/PNG image into a format that can be saved to MongoDB; you can see it as an easy alternative to GridFS.
Please check this GitHub repo for more details.
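For the GridFS route, a minimal sketch might look like this (TypeScript with Express; the connection string, database name, and bucket name are assumptions):
// GridFS + multer sketch; connection string and bucket name are assumptions.
import express from "express";
import multer from "multer";
import { MongoClient, GridFSBucket } from "mongodb";

const upload = multer({ storage: multer.memoryStorage() });

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const bucket = new GridFSBucket(client.db("app"), { bucketName: "images" });
  const app = express();

  app.post("/images", upload.single("image"), (req, res) => {
    if (!req.file) return res.status(400).send("no file");
    // Stream the buffered upload into GridFS chunks.
    const stream = bucket.openUploadStream(req.file.originalname, {
      metadata: { mimeType: req.file.mimetype },
    });
    stream.end(req.file.buffer, () => res.json({ fileId: stream.id }));
  });

  app.listen(3000);
}

main();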

Manage security on file upload to nodejs

I have an image upload view on my client (Ember.js) that sends the resized image to a Node.js REST API.
It works well, but it is easy for an expert to force the upload of a non-resized image.
I would like to keep the resize process on the client, because this allows users to select heavyweight images that are resized locally and uploaded only after that, when they are lightweight.
If someone else uses something like this, I'm interested in how it is possible to make it as safe as possible.
A rule of thumb when developing web applications: never, ever trust any data coming from the client side; always check it on the server side!
Use authentication. This ensures that users are only allowed to upload data to their own account and cannot fiddle with others' files.
Add special message passing between your server and client. A simple example (sketched below) would be:
i. Send a POST API request first (containing the image information and targeted compressed size) to your server, indicating that your client is starting to compress the picture.
ii. When uploading, add metadata to include the complete compressed image, and have your server check whether the uploaded image is within the accepted threshold; otherwise discard it.
You could enhance the security of the message passing to be more complicated!
This would be my simple take on security; anyone else got a better solution? :)
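A loose sketch of that two-step handshake (TypeScript with Express; the route names, slack factor, and in-memory store are all assumptions, not a hardened design):
// Loose sketch of the two-step handshake; routes, limits, and the in-memory
// store are assumptions, not a hardened design.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

const pending = new Map<string, { targetBytes: number }>();

// i. The client announces the upload and the size it will compress to.
app.post("/uploads/intent", (req, res) => {
  const id = randomUUID();
  pending.set(id, { targetBytes: req.body.targetBytes });
  res.json({ uploadId: id });
});

// ii. On upload, reject anything over the announced size (10% slack).
app.post("/uploads/:id", express.raw({ type: "image/*", limit: "10mb" }), (req, res) => {
  const intent = pending.get(req.params.id);
  if (!intent) return res.status(400).send("unknown upload");
  if (req.body.length > intent.targetBytes * 1.1) {
    return res.status(413).send("image larger than declared"); // discard it
  }
  pending.delete(req.params.id);
  res.sendStatus(201);
});

app.listen(3000);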
The approaches here also work for file uploads. You can use a combination of checks:
the content-length header (i.e. req.headers['content-length'] > x), and/or
the stream size as it is being read by the server (i.e. req.on('data')).
If the stream data exceeds a certain size, you can respond accordingly. Check out something like Multer for file uploads, specifically the limits section. The best approach would probably be the second option.
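A minimal sketch combining both checks via Multer's limits option (TypeScript with Express; the 2 MB cap is an arbitrary example):
// Both checks together; the 2 MB cap is an arbitrary example value.
import express from "express";
import multer from "multer";

const MAX_BYTES = 2 * 1024 * 1024;
const upload = multer({
  dest: "uploads/",
  limits: { fileSize: MAX_BYTES }, // counts real bytes as the stream is read
});

const app = express();

// Cheap early rejection based on the (spoofable) header.
app.use((req, res, next) => {
  if (Number(req.headers["content-length"]) > MAX_BYTES) {
    return res.status(413).send("payload too large");
  }
  next();
});

app.post("/upload", upload.single("image"), (req, res) => {
  res.json({ ok: true, file: req.file?.filename });
});

// Multer raises LIMIT_FILE_SIZE when the stream-level check trips.
app.use((err: any, _req: express.Request, res: express.Response, next: express.NextFunction) => {
  if (err?.code === "LIMIT_FILE_SIZE") return res.status(413).send("file too large");
  next(err);
});

app.listen(3000);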

Can I send data to a RemoteApp using Remote Desktop Services?

When I launch a RemoteApp via Remote Desktop Web Access, is there a way to send data to the remote app?
Desired scenario:
A user logs into a website with their credentials. They also provide demographic information such as first name, last name, address, etc.
The website connects to the RemoteApp via SSO and makes the demographic information available to the RemoteApp.
For example, if the RemoteApp is a Windows Forms app, can I get this information and display it in a message box?
Edit 1: TomTom's response in this question mentions using named pipes to send data. Is that applicable to this problem?
It turns out you can pass command-line parameters to the RemoteApp using the remoteapplicationcmdline property, like so:
remoteapplicationcmdline:s:/Parameter1: 5234 /Parameter2: true
(The names "/Parameter1" and "/Parameter2" are just examples. Your remote app will have to define and handle these as appropriate.)
This setting is part of the RdpFileContents property of the MsRdpClientShell object.
Here is a resource for other RdpFileContents properties.
Your code might end up looking something like this:
MsRdpClientShell.PublicMode = true;
MsRdpClientShell.RdpFileContents = 'redirectclipboard:i:1 redirectposdevices:i:0 remoteapplicationcmdline:s:/Parameter1: 5234 /Parameter2: true [Other properties here...]';
MsRdpClientShell.Launch();
For larger amounts of information, we might send preliminary data to a web service, retrieve an identifier back, pass this identifier to the RemoteApp via the command line, then have the RemoteApp query the web service to get all the information (sketched below).
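A hypothetical sketch of that identifier pattern on the web-service side (TypeScript with Express; the routes, the /SessionId parameter name, and the in-memory store are illustrative only):
// Hypothetical web service for the identifier pattern; routes and the
// in-memory store are illustrative only.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

const sessions = new Map<string, unknown>();

// Website: store the demographics, get back an id small enough to embed in
// remoteapplicationcmdline (e.g. /SessionId:<id>).
app.post("/sessions", (req, res) => {
  const id = randomUUID();
  sessions.set(id, req.body);
  res.json({ id });
});

// RemoteApp: fetch the full payload using the id it was launched with.
app.get("/sessions/:id", (req, res) => {
  const data = sessions.get(req.params.id);
  if (!data) return res.status(404).end();
  sessions.delete(req.params.id); // one-time use
  res.json(data);
});

app.listen(3000);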
Of course, for the parameters to be of use, the program must be looking for them. Setting up a database to query poses a small security issue if the data is sensitive.
If the program (RemoteApp) is looking for data in the form of a CSV or table or something similar, then you might be able to send a lot of data to be processed. It just depends on what parameters (and what form of them) the program is going to use.
