I am new to libcurl and to cloud technology in general.
I need to store and retrieve 1024-byte objects in the cloud, where each object has two properties: a file name and an id.
Can someone please guide me through solving this problem, or give me an example in C, so that I can understand the whole procedure?
Right now I cannot even figure out how to authenticate with SoftLayer.
Take a look at this documentation; it explains how to authenticate and upload files using curl:
http://sldn.softlayer.com/blog/waelriac/Managing-SoftLayer-Object-Storage-Through-REST-APIs
As you can see, working with SoftLayer Object Storage only requires a few simple HTTP GET and POST requests. You can find examples of how to do that with libcurl here: http://curl.haxx.se/libcurl/c/example.html
If you need more documentation about SoftLayer Object Storage, see:
http://sldn.softlayer.com/reference/objectstorageapi
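To make the flow concrete, here is a minimal sketch of the two calls the blog post describes. The question asks for C/libcurl, but to keep the example short this is written in Python with the requests library; the same headers map directly onto libcurl's CURLOPT_HTTPHEADER, and the upload is just an HTTP PUT. The cluster URL, container name, and credentials are placeholders for your own account details.

import requests

# Placeholders -- replace with your own SoftLayer account details.
AUTH_URL = "https://dal05.objectstorage.softlayer.net/auth/v1.0"
USER = "SLOS123456-2:myuser"      # object storage account:username
API_KEY = "my_api_key"

# 1. Authenticate: a GET with two headers returns a token and a storage URL.
auth = requests.get(AUTH_URL, headers={"X-Auth-User": USER,
                                       "X-Auth-Key": API_KEY})
token = auth.headers["X-Auth-Token"]
storage_url = auth.headers["X-Storage-Url"]

# 2. Create a container (a one-time step), then upload a 1024-byte object,
#    attaching the id as object metadata so it can be read back later.
requests.put(f"{storage_url}/mycontainer",
             headers={"X-Auth-Token": token})
requests.put(f"{storage_url}/mycontainer/myobject.bin",
             data=b"\x00" * 1024,
             headers={"X-Auth-Token": token,
                      "X-Object-Meta-Id": "42"})

# 3. Retrieve the object again with a GET using the same token.
obj = requests.get(f"{storage_url}/mycontainer/myobject.bin",
                   headers={"X-Auth-Token": token})
print(len(obj.content), obj.headers.get("X-Object-Meta-Id"))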
I hope it helps
Regards
Can anybody give me sample code in Python to find a folder Id (especially the last folder created) in Google Drive? Your help will be immensely appreciated.
Stack Overflow and the Drive API documentation have enough samples of Python code for Google Drive API requests; you just need to define the basic steps and patch the corresponding code parts together.
Any Google Drive API request needs to be based on the Drive API Quickstart for Python, which implements the OAuth2 authorization flow and the creation of an authenticated service.
Once you have this, you can list your files to retrieve their Ids.
In order to narrow down the results, you can define the search parameter q, e.g. specifying mimeType = 'application/vnd.google-apps.folder'.
Use the parameter orderBy to request that the most recently modified folders are shown first.
Use the parameter pageSize to define how many results you want to obtain (if you only want the newest folder Id, 1 is a valid value).
Stack Overflow is not meant to help you write code from scratch, so I recommend searching for similar questions along the lines of the specifications above and trying to patch your code together yourself.
Then, if necessary, post a new question with your code explaining where you got stuck and asking for specific help.
Hint: Before implementing your request in Python, test it with the "Try this API" functionality of Files.list to make sure that you have adapted the parameters correctly to your needs.
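Putting those parameters together, a minimal sketch might look like this. It assumes service is the authenticated Drive v3 service object from the Quickstart; createdTime desc is used because the question asks for the last folder created (use modifiedTime desc for the most recently modified one instead).

# `service` is the authenticated Drive v3 service from the Quickstart.
results = service.files().list(
    q="mimeType = 'application/vnd.google-apps.folder'",
    orderBy="createdTime desc",   # newest folder first
    pageSize=1,                   # only one result is needed
    fields="files(id, name)"
).execute()

folders = results.get("files", [])
if folders:
    print("Newest folder:", folders[0]["name"], folders[0]["id"])
else:
    print("No folders found.")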
I want to access the MWS Inbound Shipments API from C# in a similar way to how I work with Amazon Reports using MarketplaceWebService.
But I don't know how to do it.
For example, how can I make this call from C#: https://docs.developer.amazonservices.com/en_US/fba_inbound/FBAInbound_ListInboundShipmentItems.html?
The easiest and quickest approach is to download the C# SDK for MWS, where you can create a client and call the ListInboundShipmentItems method directly. For this particular operation, ShipmentId is required. All you need to do is add your AWS keys and token (if applicable) and make the call. I noticed that the SDK used to be publicly available, but I think now you are required to log in with a seller account.
If you are not using the SDK, your request should look like this:
https://mws.amazonservices.com/FulfillmentInboundShipment/2010-10-01/
?Action=ListInboundShipmentItems
&Version=2010-10-01
&AWSAccessKeyId=1QZHP81EXAMPLEN5R44N
&MWSAuthToken=amzn.mws.4ea38b7b-f563-7709-4bae-87aeaEXAMPLE
&SignatureVersion=2
&SignatureMethod=HmacSHA256
&Signature=VY6sqvdk01VeEXAMPLEG0Vh4oj3
&Timestamp=2015-12-01T02:40:36Z
&SellerId=1234567890
&ShipmentId=SSF85DGIZZ3OF1
Make sure you read the developer guide too.
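If you do build the request yourself instead of using the SDK, the only non-obvious part is computing the Signature value. The sketch below is in Python rather than C#, purely to illustrate the Signature Version 2 steps, which are language-agnostic; the keys, token, and seller/shipment IDs are placeholders.

import base64, hashlib, hmac, urllib.parse
from datetime import datetime, timezone

# Placeholder credentials and identifiers.
SECRET_KEY = "YOUR_AWS_SECRET_KEY"
ENDPOINT_HOST = "mws.amazonservices.com"
ENDPOINT_PATH = "/FulfillmentInboundShipment/2010-10-01"

params = {
    "Action": "ListInboundShipmentItems",
    "Version": "2010-10-01",
    "AWSAccessKeyId": "YOUR_AWS_ACCESS_KEY",
    "MWSAuthToken": "YOUR_MWS_AUTH_TOKEN",
    "SellerId": "YOUR_SELLER_ID",
    "ShipmentId": "SSF85DGIZZ3OF1",
    "SignatureVersion": "2",
    "SignatureMethod": "HmacSHA256",
    "Timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}

# Signature Version 2: sort the parameters, URL-encode them, and sign
# "POST\n<host>\n<path>\n<canonical query string>" with HMAC-SHA256.
canonical = "&".join(
    f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
    for k, v in sorted(params.items())
)
string_to_sign = f"POST\n{ENDPOINT_HOST}\n{ENDPOINT_PATH}\n{canonical}"
digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha256).digest()
params["Signature"] = base64.b64encode(digest).decode()

# The signed parameters are then POSTed as the form body to
# https://mws.amazonservices.com/FulfillmentInboundShipment/2010-10-01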
I am trying to automate some simple updating of a Google spreadsheet and I'm using the gspread library to do so. One task that is critical and not currently supported by gspread is the ability to add comments to a specific cell (there's an open issue for this and even a gist solution but I was getting a 404 error when trying to use it).
I know that the Google Drive API (v3) supports adding comments as described here, but I'm having issues with authenticating and could use some help.
What I have/know:
I have already set up OAuth 2.0 and registered for the API through Google, and I have the client_secret.json in my directory, but my knowledge of web requests and responses is limited, so the Drive API documentation hardly makes sense to me. I know that in order to create the comments I will have to make use of anchors and specify the cell location using column/row numbers.
What I'm stuck on:
When using the Google API Explorer, I'm getting a 400 error with the message: The 'fields' parameter is required for this method. How can I make the POST request using my authentication? I think from there I'd be able to actually add the comments myself.
I'm getting a 400 error with the message: The 'fields' parameter is required for this method
The error is asking for a property which you want returned (these properties are listed in Drive API files resource).
You can just pass '*' to indicate that you want a complete response returned. That's the quick fix.
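In the Python client you likely already have installed alongside gspread (google-api-python-client), that translates to passing fields='*' to comments().create. A minimal sketch, assuming creds is the credentials object from your existing OAuth flow and FILE_ID is a placeholder for the spreadsheet's Drive file id:

from googleapiclient.discovery import build

# `creds` comes from the OAuth 2.0 flow you have already set up.
FILE_ID = "your-spreadsheet-file-id"
drive = build("drive", "v3", credentials=creds)

comment = drive.comments().create(
    fileId=FILE_ID,                      # the spreadsheet's Drive file id
    body={"content": "Automated note"},
    fields="*",                          # required: which fields to return
).execute()

print(comment["id"], comment["content"])

Anchoring the comment to a specific cell would additionally require setting the anchor property in the body; the anchor format for Sheets is only loosely documented, so treat that part as experimental.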
I'm working with a PHP script that POSTs to a GPService toolbox (written in Python); the first parameter is supposed to be a GPDataFile. From the documentation, it looks like I can set the value of this parameter to a JSON-formatted string literal, {"url": "http://localhost/export/1234567890.kml"}, and arcpy.GetParameter(0) should handle this object correctly.
Unfortunately I am receiving an error saying 'Please check your parameters'. There are two other parameters on the toolbox, but they are just strings and are working correctly. I am working in ArcGIS 10.0.
The overall goal of this interaction is to send a KML file from our SWF/ActionScript to the PHP, which saves the KML to our database and subsequently sends it to the GPService to translate it into a GDB and then to individual shapefile objects that are stored in the database for rendering back to the SWF/Actionscript.
Any help or thoughts on how to get the toolbox to accept the JSON structure would be greatly appreciated; I would like to avoid having to send the KML contents as a string object to the toolbox.
The answer may be what maniksundaram wrote on the Esri forum (https://community.esri.com/thread/107738):
ArcGIS Server does not support direct GPDataFile upload. You have to upload the file using the upload task and pass the resulting item ID to the GP service.
Here is the high-level idea to get it working for any GP service that needs a file upload:
-Publish the geoprocessing service with the upload option enabled.
Refer to: ArcGIS Help (10.2, 10.2.1, and 10.2.2)
Operations allowed: Uploads: This capability controls whether a client can upload a file to your GIS server that the tasks within the geoprocessing service would eventually use. The upload operation is mainly used by web clients that need a way to send a file to the server for processing. The upload operation returns a unique ID for the file after the upload completes, which the web application could pass to the geoprocessing service. You may need to modify the maximum file size and timeouts depending on how large an upload you want your server to accept. Check the local REST SDK documentation installed on your ArcGIS Server machine for information on using an uploaded file with a geoprocessing service. This option is off by default. Allowing uploads to your service could possibly pose a security risk. Only turn this on if you need it.
-Upload the file using the upload URL that is generated for the geoprocessing service. The response will give you the itemID of the uploaded file.
http://<servername>:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer/uploads/upload
Response Json:
{"success":true,"item":{"itemID":"ie84b9b8a-5007-4337-8b6f-2477c79cde58","itemName":"SStation.csv","description":null,"date":1409942441508,"committed":true}}
-Invoke the geoprocessing service with the item ID as the GPDataFile input.
For example, the KMLInput value would be {"itemID":"ie84b9b8a-5007-4337-8b6f-2477c79cde58"}
-The result will be added to a map service with the job ID if you have configured viewing the GP results in a map service, or you can read the response as it is returned.
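Put together, the two REST calls might look like the sketch below. It is shown in Python with the requests library (the same two POSTs are easy to reproduce from PHP with curl); the server name, file path, and the KMLInput parameter name follow the example above and should be replaced with your own.

import json
import requests

# Service URL and parameter name follow the example above -- adjust to your server.
GP_BASE = "http://myserver:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer"

# 1. Upload the KML file; the JSON response contains the itemID.
with open("/var/www/export/1234567890.kml", "rb") as f:
    upload = requests.post(f"{GP_BASE}/uploads/upload",
                           files={"file": f},
                           data={"f": "json"}).json()
item_id = upload["item"]["itemID"]

# 2. Submit the GP job, passing the uploaded item as the GPDataFile parameter.
job = requests.post(f"{GP_BASE}/ConvertKMLToLayer/submitJob",
                    data={"KMLInput": json.dumps({"itemID": item_id}),
                          "f": "json"}).json()
print(job["jobId"], job["jobStatus"])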
I am very new to Salesforce and its API.
I have a sandbox org, and with it I have a URL, username, password, security token and, last but not least, partner.wsdl.
My aim was to connect and retrieve/create data.
The technology at hand was Node.js.
So here is how I started.
I searched the internet and learned that I need to create a SOAP client in order to log in, create the connection, and use that connection to create and access the Leads data. So I followed this sample where the WSDL was being consumed.
So I was able to connect.
I was very happy with this success, but then I could not find a way to get the sObject. I looked hard for this but had no luck, so I posted a question on SO.
Meanwhile I also looked for other Node modules and found jsforce.
I used the jsforce getting-started guide and created a client that connected to Salesforce, but without using the WSDL file.
Again I was happy, even happier because now I had the sObject.
Now, in Salesforce terms, what is the fundamental difference between logging in with the local WSDL file and logging in without the WSDL file? Which one is the correct way of logging in?
Sorry if this question is not according to SO rules or if there is a typo.
I'm the author of jsforce, which you mentioned.
In that lib we mostly use the REST API; SOAP APIs are only used for some specific calls like login or metadata. Even in those calls we don't use WSDLs, because there is no good library for generating client modules from a WSDL in the JavaScript/Node.js ecosystem. Instead we wrote modules for each of those APIs which generate the SOAP XML strings and parse the response XML.
That is enough because these API message schemas are static and fixed for a given API version, unlike the SOAP API Enterprise WSDL, which differs between organizations. So we can hard-code the client module directly without generating it from a WSDL.