I'm using Requests for a program that interfaces with a server that has a well-documented API, except for file upload, which uses multipart/form-data with a file plus text attributes. Every other request uses JSON and that has not been an issue.
The docs say that the header must contain content-type: multipart/form-data, but I have realized that Requests does that automatically. The docs then list a number of text attributes in the form author.fullName: Heidi Walker and a file attribute in the form content: [physical file].
I have a working example in Postman but I can't get it to work with Requests. This is the code I have:
import requests

header = {
    'session_id': sID,
}
# These are the required text attributes from the API docs + the actual file
fileInfo = {
    'author.fullName': 'Fred',
    'category.guid': 'xxx',
    'description': 'Data Sheet',
    'format': 'PDF',
    'private': 'false',
    'storageMethodName': 'FILE',
    'title ': 'Test Datasheet',
    'content': open(path, 'rb')
}
resp = requests.post(url + '/files', headers=header, files=fileInfo)
I keep getting back 400 errors from the server.
Also, is there any way to see the formatted body of the request, so I can check that the boundary tags have been added correctly and compare it to what Postman created?
I have been struggling with this for way too long, so any help would be greatly appreciated.
Update:
I was able to enable logging with the logging module as outlined on this page: https://requests.readthedocs.io/en/master/api/
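For anyone else looking, this is roughly the setup that page describes (a minimal sketch for Python 3; http.client echoes everything passed to send(), so the multipart body shows up in the debug output):
import logging
import http.client as http_client

# Echo the raw request, including the multipart body, to stdout
http_client.HTTPConnection.debuglevel = 1

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True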
Inspecting the body of the request I see:
Content-Disposition: form-data; name="author.fullName"; filename="author.fullName"\r\n\r\nFred\r\n
What I would like to see (working postman example) is:
Content-Disposition: form-data; name=\"author.fullName\"\r\n\r\nFred\r\n
It seems that filename="author.fullName" is being inserted on every line.
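For anyone hitting the same thing with plain Requests: every value passed through files= gets a filename attached (for a plain string, the field name is reused as the filename). One workaround, a sketch reusing the field names from above rather than a tested fix for this API (fileName stands in for whatever filename the server should see), is to send the text attributes through data= and only the real file through files=:
# Text attributes go in data=, only the actual file goes in files=,
# so Requests attaches a filename to the file part only.
fields = {
    'author.fullName': 'Fred',
    'category.guid': 'xxx',
    'description': 'Data Sheet',
    'format': 'PDF',
    'private': 'false',
    'storageMethodName': 'FILE',
    'title ': 'Test Datasheet',
}
with open(path, 'rb') as f:
    resp = requests.post(url + '/files',
                         headers=header,
                         data=fields,
                         files={'content': (fileName, f, 'application/pdf')})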
Finally got this working, here is the code if anyone else is struggling with this.
# MultipartEncoder comes from requests_toolbelt (pip install requests-toolbelt)
from requests_toolbelt import MultipartEncoder

multi = MultipartEncoder(
    fields={'author.fullName': 'Automatic Upload',
            'category.guid': 'XXX',
            'description': 'Data Sheet',
            'edition': '01',
            'format': 'PDF',
            'private': 'false',
            'storageMethodName': 'FILE',
            'title ': title,
            'content': (fileName, open(path, 'rb'), 'application/pdf')}
)
mpHeader = {
    'Content-Type': multi.content_type,
    'session_id': sID,
    'cache-control': "no-cache",
}
resp = requests.post(url + '/files', headers=mpHeader, data=multi)
Related
import requests
import json
url = "********"
payload = json.dumps({
"username": "*****",
"password": "*****"
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
# *** Convert response into a dictionary ***
r = json.loads(response.text)
# *** Print the value of the key 'accessToken' ***
bearerToken = r['accessToken']
print(bearerToken)
Output
{'accessToken': 'e************************************', 'tokenType': 'Bearer'}
What am I trying to achieve?
Grab only the censored code after 'accessToken' and store the new access token in a string, so I can use it in HTTP requests.
Note: 'accessToken' here is a value inside another key that is also called accessToken, so the traditional method of printing the value of the key has already been used and produced the output shown above.
Complete output:
{
"accessToken" : {
"accessToken" : "e******************",
"tokenType" : "Bearer"
},
"refreshToken" : {
"id" : "6*************",
"lastAccessedTime" : 1***********,
"refreshToken" : "e*************"
}
}
In Python, you can use substrings to accomplish what you want. If the length of the prefix is constant, then all you would need to do is use a slice with a constant index.
In your example, it looks like you have an accessToken key, and inside that key is another dictionary holding a key that can change between entries. Assuming that you want the censored portion after the e in accessToken, you can access that using:
bearerToken = r['accessToken'][accessToken][1:]
This will give you the access token "******************", which is everything from index 1 onwards. If you just want the entire string, you can omit the [1:] portion.
The solution was to edit the code provided by yeeshue99, changing the second key to 'accessToken' with the quote marks. The [1:] was also not needed.
Solution
bearerToken = r['accessToken']['accessToken']
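A quick sketch of using the extracted token in a later request; the /profile path is made up purely for illustration, and it assumes the API expects a standard Bearer Authorization header:
# Hypothetical follow-up request showing how the token is sent
api_headers = {'Authorization': 'Bearer ' + bearerToken}
profile = requests.get(url + "/profile", headers=api_headers)
print(profile.status_code)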
I'm trying to use OpenScale to check the explainability of my image classification model (Keras 2.2.4, TensorFlow 1.11).
So far, I have finished the configuration and am able to see the explainability of my first scoring request. However, when I tried to send a new request, the record was sent to the PayloadError table with the error message in the title.
Am I sending a wrong payload record?
The relevant part of my code is below:
import json
import cv2
import numpy as np
import requests

imagefile = 'test_image\\fusion\\Black-sample05-basyo1-muki14_6_3.JPG'
img = cv2.imread(imagefile)
img_resized = cv2.resize(img, (104, 104))
print(img_resized.shape)
im = np.array(img_resized)
im_data = np.uint8(im)
im_data2 = im_data[:, :, :3]    # keep only the 3 colour channels
print('shape2: ', im_data2.shape)
im_data3 = im_data2.tolist()    # nested lists so the payload is JSON-serialisable
print(im_data3)
header = {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + iam_token}
payload_scoring = {"values": [im_data3] }
scoring_url="https://us-south.ml.cloud.ibm.com/v3/wml_instances/564d5095-31bf-4b1d-98e3-114cf2b2f409/deployments/3a60a744-dadf-481f-b0f7-512963cc8ce3/online"
response_scoring = requests.post(scoring_url, json=payload_scoring, headers=header)
print("Scoring response")
print(json.loads(response_scoring.text))
>{'fields': ['prediction', 'prediction_classes', 'probability'], 'values': [[[1.0, 0.0], 0, [1.0, 0.0]]]}
You should not set any control field for scoring_input. I can see that scoring_input has predicted_target_field (decoded-target) set.
Since it has been set, the easiest way forward would be to delete this subscription and repeat your steps without setting any control field for scoring_input.
For example, I've installed the Cro module. When I run my simple code:
my %headers = {Authorization => 'OAuth realm="", oauth_consumer_key="xxxxxxxxxxxxxxxx", oauth_nonce="29515362", oauth_signature="KojMlteEAHlYjMcLc6LFiOwRnJ8%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1525913154", oauth_token="xxxx-xxxxxxxxxxxxxxxxxx", oauth_version="1.0"', User-Agent => 'Cro'};
my $resp = await Cro::HTTP::Client.get: 'http://api.fanfou.com/statuses/home_timeline.json',
    headers => [
        user-agent   => 'Cro',
        content-type => 'application/json;charset=UTF-8',
        |%headers
    ];
say $resp.header('content-type'); # Output: application/json; charset=utf-8;
my Str $text = await $resp.body-text();
And it says 'Could not parse media type application/json; charset=utf-8;'.
Died with the exception:
Could not parse media type 'application/json; charset=utf-8;'
in method parse at /Users/ohmycloud/.perl6/sources/5B710DB8DF7799BC8B40647E4F9945BCB8745B69 (Cro::MediaType) line 74
in method content-type at /Users/ohmycloud/.perl6/sources/427E29691A1F7367C23E3F4FE63E7BDB1C5D7F63 (Cro::HTTP::Message) line 74
in method body-text-encoding at /Users/ohmycloud/.perl6/sources/427E29691A1F7367C23E3F4FE63E7BDB1C5D7F63 (Cro::HTTP::Message) line 83
in block at /Users/ohmycloud/.perl6/sources/F870148C579AB45DEB39F02722B617776C3D6D5F (Cro::MessageWithBody) line 49
It seems that application/json; charset=utf-8; is not a valid content-type, so I added a test:
use Cro::MediaType;
use Test;

sub parses($media-type, $desc, &checks) {
    my $parsed;
    lives-ok { $parsed = Cro::MediaType.parse($media-type) }, $desc;
    checks($parsed) if $parsed;
}

parses 'application/json; charset=utf-8;', 'application/json media type with charset', {
    is .type, 'application', 'Correct type';
    is .subtype, 'json', 'Correct subtype';
    is .subtype-name, 'json', 'Correct subtype name';
    is .tree, '', 'No tree';
    is .suffix, '', 'No suffix';
    is .Str, 'application/json; charset=utf-8;', 'Stringifies correctly';
};

done-testing;
And the output is:
not ok 1 - application/json media type with charset
# Failed test 'application/json media type with charset'
# at cro_media.pl6 line 6
# Could not parse media type 'application/json; charset=utf-8;'
1..1
# Looks like you failed 1 test of 1
The source code seems to be located in the file /Users/ohmycloud/.perl6/sources/5B710DB8DF7799BC8B40647E4F9945BCB8745B69 (Cro::MediaType), so I added ';'? after <media-type> in the TOP token:
token TOP { <media-type> ';'? }
I saved it and ran my code again, but the error is the same. So how do I make the change take effect? In Perl 5, I can just edit my .pm module, but in Perl 6, I don't know what to do.
In this answer in zef's issues, they state that "installations are immutable".
It's probably a better option if you download Cro from its source, patch it and install again so that your application picks up the new version.
It might also happen that 'application/json' does not admit that charset declaration, or that there should be no space behind the ;. But the main issue here is that you shouldn't edit modules once installed.
As jjmerelo mentioned, installations are immutable. One solution is to download the source code (including the META6.json file), edit the code you want, and then run:
zef install . --/test
For a simple test, that works for me.
As for 'application/json; charset=utf-8;' failing to parse, I added ; to the token token in MediaType.pm6 to make it possible to include ; (maybe it's a bug, I don't know):
token token { <[A..Za..z0..9;!#$%&'*+^_`{|}~-]>+ }
I installed it locally, and it parses OK now.
I have a scripted pipeline. In one of my steps I want to send different mails based on test results. Here is how I do it now:
def email_body
if (buildResult == 'SUCCESS') {
    email_body = "TEST_SUCESS.template"
} else {
    email_body = "TEST_FAILURES.template"
}
emailext(
subject: "Job '${env.JOB_NAME} [${env.BUILD_NUMBER}] finished",
body: "${SCRIPT,template=$email_body}", // LINE A
recipientProviders: [[$class: 'DevelopersRecipientProvider']],
to: 'XXXX',
from: 'YYYY',
replyTo: 'ZZZZ',
mimeType: 'text/html',
)
I can't get Jenkins to expand the value of the variable email_body. I've tried various approaches on line A:
"${SCRIPT,template=$email_body}"
"${SCRIPT,template=${email_body}}"
'''${SCRIPT,template=$email_body}'''
'''${SCRIPT,template=${email_body}}'''
None of them works. All I get in the email is either:
Groovy Template file [$email_body] was not found in $JENKINS_HOME/email-templates.
or
${SCRIPT,template=$email_body}.
What is the correct way of setting email content if the email content is stored in a variable?
Try to use this example:
String subject = "${env.JOB_NAME} was " + currentBuild.result.toString();
String email_body="TEST_" + currentBuild.result.toString() + ".template"
String body = "SCRIPT,template=" + email_body;
String to="some_mail#mail.com"
String reply="ZZZ"
emailext(subject: subject, body: body, to: to, replyTo: reply);
I am trying to parse a CSV file using NodeJS.
so far I have tried these packages:
Fast CSV
YA-CSV
I would like to parse a CSV file into objects based on the header. I have been able to accomplish this with fast-csv, but I have "'" values in my CSV file that I would like to ignore. I can't seem to do this with fast-csv even though I try to use
{escape:'"'}
I used ya-csv to try to get around this but no values are being written when I try:
var reader = csv.createCsvFileReader('YT5.csv', {columnsFromHeader:true, 'separator': ','});
var writer = new csv.CsvWriter(process.stdout);
reader.addListener('YT5', function(data){
writer.writeRecord(data);
});
I get no output, any help would be great.
Thanks.
Edit:
I want the output in this format....
{ 'Video ID': '---kAT_ejrw',
'Content Type': 'UGC',
Policy: 'monetize',
'Video Title': 'Battlefield 3 Multiplayer - TDM na Kanałach Nouszahr (#3)',
'Video Duration (sec)': '1232',
Username: 'Indmusic',
Uploader: 'MrKacu13',
'Channel Display Name': 'MrKacu13',
'Channel ID': '9U6il2dwbKwE4SK3-qe35g',
'Claim Type': 'Audio',
'Claim Origin': 'Audio Match',
'Total Views': '11'
}
The header line is this.
Video ID,Content Type,Policy,Video Title,Video Duration (sec),Username,Uploader,Channel Display Name,Channel ID,Claim Type,Claim Origin,Total Views,Watch Page Views,Embedded Player Views,Channel Page Video Views,Live Views,Recorded Views,Ad-Enabled Views,Total Earnings,Gross YouTube-sold Revenue,Gross Partner-sold Revenue,Gross AdSense-sold Revenue,Estimated RPM,Net YouTube-sold Revenue,Net AdSense-sold Revenue,Multiple Claims?,Category,Asset ID,Channel,Custom ID,ISRC,GRid,UPC,Artist,Asset Title,Album,Label
All of these values will be filled in, and some fields, like the title, contain single quotes.
This looks like a mess, so let me know if you need another format.
This was resolved by using ya-csv.
In order to use it I had to do a little more research; I did not know that I had to add another listener for the data I wanted to read, or that the event was simply called 'data'.
var reader = csv.createCsvFileReader('YT5.csv', {columnsFromHeader: true, 'separator': ','});
var writer = new csv.CsvWriter(process.stdout);
reader.addListener('data', function(data) {
    // do something with data
    writer.writeRecord(data);
});
reader.addListener('end', function() {
    console.log('thats it');
});
This read the file without any issues from the single quotes.