Any suggestions on this front?
partner.rb
LOGO_VALIDATIONS = {
  max_size: 2.megabytes
}

has_one_attached :logo

validate :validate_logo_attachment

def validate_logo_attachment
  if logo.attached? && logo.blob.byte_size > LOGO_VALIDATIONS[:max_size]
    errors.add(:logo, 'must be less than 2 Mb')
  end
end
partners_controller.rb
def change_logo
  @current_partner.logo.purge
  @current_partner.logo.attach(change_logo_params[:logo])
  render_error(@current_partner.errors.full_messages) && return unless @current_partner.valid?
  @current_partner.save!
  render_success({ logo_url: @current_partner.logo_url })
end
When I try to change the logo to an image that is more than 2 Mb, I get the following expected error.
{
"result": "failed",
"messages": [
"Logo must be less than 2 Mb"
]
}
But when I check the image via rails_blob_path(@current_partner.logo, disposition: "attachment", only_path: true), I get the new logo image.
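One workaround would be to check the upload's size before purging and attaching, so an oversized file never replaces the existing logo. A minimal sketch, assuming change_logo_params[:logo] is a plain file upload and that render_error/render_success behave as in the controller above:
def change_logo
  uploaded = change_logo_params[:logo]
  # reject oversized uploads before touching the current attachment
  return render_error(['Logo must be less than 2 Mb']) if uploaded.size > Partner::LOGO_VALIDATIONS[:max_size]
  @current_partner.logo.purge
  @current_partner.logo.attach(uploaded)
  render_success({ logo_url: @current_partner.logo_url })
end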
Thanks for taking the time out to read this. I want to find a way of parsing the JSON below. I'm really struggling to get the correct values out. I am getting this info from an API and want to save this data into a database.
I am really struggling to parse info_per_type because I first need to get the available_types. These can change depending on the info available (i.e. I might get two different types in the next call; there's a total of four), so my code needs to be flexible enough to deal with this.
```
{
"data": [
{
"home_team": "Ravenna",
"id": 82676,
"available_types": [
"type_a",
"type_b"
],
"info_per_type": {
"type_a": {
"options": {
"X": 0.302,
"X2": 0.61,
"X3": 0.692,
"X4": 0.698,
"X5": 0.39,
"X6": 0.308
},
"status": "pending",
"output": "12",
"option_values": {
"X": 3.026,
"X2": 1.347,
"X3": 1.516,
"X4": 1.316,
"X5": 2.936,
"X6": 2.339
}
},
"type_b": {
"options": {
"yes": 0.428,
"no": 0.572
},
"status": "pending",
"output": "no",
"option_values": {
"yes": null,
"no": null
}
}
}
}
]
}
```
So far, I can get the available_types out. But after that, I'm stuck. I have tried eval and exec but I can't seem to get that working either.
```
import json
import requests

r = requests.get(url, headers=headers).text
arrDetails = json.loads(r)
arrDetails = arrDetails['data']
x = arrDetails[0]['available_types']
print(x[1])  # I get the correct value here
y = exec("y = arrDetails[0]['info_per_type']['" + x[1] + "']")
print(y)
```
When I print out y I get None. What I want is some way to reference that part of the json file, as the results within that node are what I need. Any help would be HIGHLY appreciated!
Something like this should work:
```
for row in arrDetails['data']:
    for available_type in row['available_types']:
        print(row['info_per_type'][available_type])
```
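If it helps, here is a minimal sketch of how the same nested loop could build flat records ready to insert into a database. The fields picked out (status, output, options, option_values) come from the sample payload above, while the record layout itself is just an assumption about what you might want to store:
```
import json
import requests

r = requests.get(url, headers=headers).text  # url and headers as in your code
payload = json.loads(r)

records = []
for row in payload['data']:
    for available_type in row['available_types']:
        info = row['info_per_type'][available_type]
        records.append({
            'id': row['id'],
            'home_team': row['home_team'],
            'type': available_type,
            'status': info['status'],
            'output': info['output'],
            'options': info['options'],              # dict of option -> number
            'option_values': info['option_values'],  # dict of option -> value (may contain None)
        })

print(records)
```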
I am trying to rename nested fields from Elasticsearch while migrating to Amazon Elasticsearch.
In the document, I want to change the following:
1. If the value field is of JSON type, change the value field to value-keyword and remove "value-whitespace" and "value-standard" if present.
2. If the value field has a size of more than 15, change the value field to value-standard.
"_source": {
"applicationid" : "appid",
"interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
"interactionInfo": [
{
"value": """{"edited":false}""",
"value-standard": """{"edited":false}""",
"value-whitespace" : """{"edited":false}"""
"title": "msgMeta"
},
{
"title": "msg",
"value": "hello testing",
},
{
"title": "testing",
"value": "I have a text that can be done and changed only the size exist more than 20 so we applied value-standard ",
}
],
"uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3",
}
}
The end result should be:
"_source": {
"applicationid" : "appid",
"interactionId": "716bf006-7280-44ea-a52f-c79da36af1c5",
"interactionInfo": [
{
"value-keyword": """{"edited":false}""",
"title": "msgMeta"
},
{
"title": "msg",
"value": "hello testing",
},
{
"title": "testing",
"value-standard": "I have a text that can be done and changed only the size exist more than 20 and so we applied value-standard ",
}
],
"uniqueIdentifier": "a21ed89c-b634-4c7f-ca2c-8be6f31ae7b3",
}
}
For 2), you can do it like this:
filter {
  if [_source][interactionInfo][2][value] =~ /.{15,15}/ {
    mutate {
      rename => ["[_source][interactionInfo][2][value]","[_source][interactionInfo][2][value-standard]"]
    }
  }
}
The regex .{15,15} matches any run of 15 characters, so the condition is true for any string at least 15 characters long. If the field is shorter than 15 characters, the regex doesn't match and the mutate#rename isn't applied.
For 1), one possible solution would be trying to parse the field with the json filter and if there's no _jsonparsefailure tag, rename the field.
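A hedged sketch of that idea, following the same fixed-index example as above (the field paths, target and tag names are assumptions to adapt to the real documents):
filter {
  json {
    source => "[_source][interactionInfo][2][value]"
    target => "[@metadata][parsed_value]"
    tag_on_failure => ["_jsonparsefailure"]
  }
  if "_jsonparsefailure" not in [tags] {
    mutate {
      rename => ["[_source][interactionInfo][2][value]","[_source][interactionInfo][2][value-keyword]"]
      remove_field => ["[_source][interactionInfo][2][value-whitespace]","[_source][interactionInfo][2][value-standard]"]
    }
  }
}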
Found the solution for this one. I used a ruby filter in Logstash to check each document as well as the nested documents.
Here is the ruby code:
require 'json'

def register(param)
end

def filter(event)
  infoarray = event.get("interactionInfo")
  infoarray.each { |x|
    # rename when the value is longer than 15 characters
    if x.include?("value")
      value = x["value"]
      if value.length > 15
        apply_only_keyword(x)
      end
    end
    # rename when the value parses as JSON
    if x.include?("value")
      value = x["value"]
      if validate_json(value)
        apply_only_keyword(x)
      end
    end
  }
  event.set("interactionInfo", infoarray)
  return [event]
end

# returns true only when the value is parseable JSON
def validate_json(value)
  if value.nil?
    return false
  end
  JSON.parse(value)
  return true
rescue JSON::ParserError => e
  return false
end

# copy value into value-keyword and drop value, value-standard and value-whitespace
def apply_only_keyword(x)
  x["value-keyword"] = x["value"]
  x.delete("value")
  if x.include?("value-standard")
    x.delete("value-standard")
  end
  if x.include?("value-whitespace")
    x.delete("value-whitespace")
  end
end
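For reference, a minimal sketch of how a script like this could be wired into the pipeline, assuming it is saved as a file next to the Logstash config (the name rename_fields.rb is just an example) and loaded with the ruby filter's path option:
filter {
  ruby {
    path => "rename_fields.rb"
  }
}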
I am setting up an automatic data labelling pipeline for my colleague.
First, I define the Ground Truth request via the API (bucket, manifests, etc.).
Second, I create the labelling job, and all files are uploaded to S3 immediately.
After that my colleague receives an email saying the data is ready to label, then he labels it and submits.
Up to this point everything is fine and quick. But when I check the SageMaker labelling job dashboard, it shows the task is in progress, and it takes a very long time to report whether it completed or failed. I don't know the reason. Yesterday it saved the results at 4 am, which took around 6 hours. But if I create the labelling job on the website instead of sending API requests, it saves the results quickly.
Can anyone explain this? Or maybe I need to set up a time sync or some other configuration?
This is my config:
{
"InputConfig": {
"DataSource": {
"S3DataSource": {
"ManifestS3Uri": ""s3://{bucket_name}/{JOB_ID}/{manifest_name}-{JOB_ID}.manifest""
}
},
"DataAttributes": {
"ContentClassifiers": [
"FreeOfPersonallyIdentifiableInformation",
"FreeOfAdultContent"
]
}
},
"OutputConfig": {
"S3OutputPath": "s3://{bucket_name}/{JOB_ID}/output-{manifest_name}/"
},
"HumanTaskConfig": {
"AnnotationConsolidationConfig": {
"AnnotationConsolidationLambdaArn": "arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass"
},
"PreHumanTaskLambdaArn": "arn:aws:lambda:us-east-2:266458841044:function:PRE-TextMultiClass",
"NumberOfHumanWorkersPerDataObject": 2,
"TaskDescription": "Dear Annotator, please label it according to instructions. Thank you!",
"TaskKeywords": [
"text",
"label"
],
"TaskTimeLimitInSeconds": 600,
"TaskTitle": "Label Text",
"UiConfig": {
"UiTemplateS3Uri": "s3://{bucket_name}/instructions.template"
},
"WorkteamArn": "work team arn"
},
"LabelingJobName": "Label",
"RoleArn": "my role arn",
"LabelAttributeName": "category",
"LabelCategoryConfigS3Uri": ""s3://{bucket_name}/labels.json""
}
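For reference, a minimal sketch of how a config like this might be submitted through the API with boto3 (job_config standing in for the dictionary above, with the placeholder bucket names and ARNs filled in):
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-2")
# job_config is the dictionary shown above (LabelingJobName, InputConfig,
# OutputConfig, HumanTaskConfig, RoleArn, ...) with real values substituted
response = sagemaker.create_labeling_job(**job_config)
print(response["LabelingJobArn"])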
I think my Lambda functions are wrong; when I switch to the AWS-provided ARNs (preHuman and annotation) everything works fine.
This is my afterLabeling Lambda:
import json
import boto3
from urllib.parse import urlparse

def lambda_handler(event, context):
    consolidated_labels = []
    # the worker annotations are stored in S3; fetch and parse them
    parsed_url = urlparse(event['payload']['s3Uri'])
    s3 = boto3.client('s3')
    textFile = s3.get_object(Bucket=parsed_url.netloc, Key=parsed_url.path[1:])
    filecont = textFile['Body'].read()
    annotations = json.loads(filecont)
    # build one consolidated label per dataset object and worker annotation
    for dataset in annotations:
        for annotation in dataset['annotations']:
            new_annotation = json.loads(annotation['annotationData']['content'])
            label = {
                'datasetObjectId': dataset['datasetObjectId'],
                'consolidatedAnnotation': {
                    'content': {
                        event['labelAttributeName']: {
                            'workerId': annotation['workerId'],
                            'result': new_annotation,
                            'labeledContent': dataset['dataObject']
                        }
                    }
                }
            }
            consolidated_labels.append(label)
    return consolidated_labels
Are there any reasons for this?
My code is the following:
def write_cells(spreadsheet_id, update_data):
    updating = sheet_service.spreadsheets().values().\
        batchUpdate(spreadsheetId=spreadsheet_id, body=update_data)
    updating.execute()

spreadsheet_data = [
    {
        "deleteDimension": {
            "range": {
                "sheetId": sheet_id,
                "dimension": "ROWS",
                "startIndex": 5,
                "endIndex": 100
            }
        }
    }
]

update_spreadsheet_data = {
    'valueInputOption': 'USER_ENTERED',
    'data': spreadsheet_data
}

update_data = update_spreadsheet_data
write_cells(spreadsheet_id, update_data)
I get the following error message:
HttpError Traceback (most recent call last)
<ipython-input-64-0ba8756b8e85> in <module>()
----> 1 write_cells(spreadsheet_id, update_data)
2 frames
/usr/local/lib/python3.6/dist-packages/googleapiclient/http.py in execute(self, http, num_retries)
838 callback(resp)
839 if resp.status >= 300:
--> 840 raise HttpError(resp, content, uri=self.uri)
841 return self.postproc(resp, content)
842
HttpError: <HttpError 400 when requesting https://sheets.googleapis.com/v4/spreadsheets/1lAI8gp29luZDKAS1m3P62sq0kKCn8eaMUvO_M_J8meU/values:batchUpdate?alt=json returned "Invalid JSON payload received. Unknown name "delete_dimension" at 'data[0]': Cannot find field.">
I don't understand this: "Unknown name delete_dimension". I'm unable to resolve it. Any help is appreciated, thanks.
You want to delete rows using the Sheets API with Python.
If my understanding is correct, how about this modification?
Modification points:
When deleting rows in a Spreadsheet, please use spreadsheets().batchUpdate().
I think that spreadsheet_data is correct.
In this case, please modify the request body to {"requests": spreadsheet_data}.
Modified script:
def write_cells(spreadsheet_id, update_data):
    # Modified
    updating = sheet_service.spreadsheets().batchUpdate(
        spreadsheetId=spreadsheet_id, body=update_data)
    updating.execute()

spreadsheet_data = [
    {
        "deleteDimension": {
            "range": {
                "sheetId": sheet_id,
                "dimension": "ROWS",
                "startIndex": 5,
                "endIndex": 100
            }
        }
    }
]

update_spreadsheet_data = {"requests": spreadsheet_data}  # Modified

update_data = update_spreadsheet_data
write_cells(spreadsheet_id, update_data)
Note:
This modified script supposes that you have already been able to use Sheets API.
Reference:
DeleteDimensionRequest
If I misunderstood your question and that was not the result you want, I apologize.
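For illustration, values().batchUpdate() is meant for writing cell values and expects ValueRange objects, which is why the deleteDimension payload was rejected with "Unknown name delete_dimension". A minimal sketch of the kind of body it does expect (the range and values below are made up):
value_body = {
    "valueInputOption": "USER_ENTERED",
    "data": [
        {"range": "Sheet1!A1:B2", "values": [["a", "b"], ["c", "d"]]}
    ]
}
sheet_service.spreadsheets().values().batchUpdate(
    spreadsheetId=spreadsheet_id, body=value_body).execute()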
I am getting a JSON response from a web service like the one below. I want to parse all children of the results node using Groovy's JsonSlurper and assert that the values are correct.
{
"status": "Healthy",
"results": [
{
"name": "Microservice one",
"status": "Healthy",
"description": "Url check MSOneURI success : status(OK)"
},
{
"name": "Microservice two",
"status": "Healthy",
"description": "Url check MSTwoURI success : status(OK)"
},
{
"name": "Microservice three",
"status": "Healthy",
"description": "Url check MSThreeURI success : status(OK)"
},
{
"name": "Microservice four",
"status": "Healthy",
"description": "Url check MSFourURI success : status(OK)"
},
{
"name": "Microservice five",
"status": "Healthy",
"description": "Url check MSFiveURI success : status(OK)"
}
]
}
This is what I have done, and it works.
//imports
import groovy.json.JsonSlurper
import groovy.json.*
//grab the response
def ResponseMessage = messageExchange.response.responseContent
// trim starting and ending double quotes
def TrimResponse = ResponseMessage.replaceAll('^\"|\"$','').replaceAll('/\\/','')
//define a JsonSlurper
def jsonSlurper = new JsonSlurper().parseText(TrimResponse)
//verify the response to be validated isn't empty
assert !(jsonSlurper.isEmpty())
//verify the Json response Shows Correct Values
assert jsonSlurper.status == "Healthy"
def ActualMsNames = jsonSlurper.results*.name.toString()
def ActualMsStatus = jsonSlurper.results*.status.toString()
def ActualMsDescription = jsonSlurper.results*.description.toString()
def ExpectedMsNames = "[Microservice one,Microservice two,Microservice three,Microservice four,Microservice five]"
def ExpectedMsStatus = "[Healthy, Healthy, Healthy, Healthy, Healthy]"
def ExpectedMsDescription = "[Url check MSOneURI success : status(OK),Url check MSTwoURI success : status(OK),Url check MSThreeURI success : status(OK),Url check MSFourURI success : status(OK),Url check MSFiveURI success : status(OK)]"
assert ActualMsNames==ExpectedMsNames
assert ActualMsStatus==ExpectedMsStatus
assert ActualMsDescription==ExpectedMsDescription
But I want to make it better using some kind of loop that processes each element of the collection one at a time and asserts the values of "name", "status" and "description" for each child.
Is that possible?
Yes, that's certainly possible.
Without knowing more about your actual data it's not possible to give a perfect example, but you could do something like:
jsonSlurper.results?.eachWithIndex { result, i ->
    assert result.name == expectedNames[i]
    assert result.status == expectedStatus[i] // or just "Healthy" if that's the only one
    assert result.description == expectedDescriptions[i]
}
where expectedWhatever is a list of expected result fields. If your expected results really are based on index like in your example, then you could even calculate them within the loop, but I'm guessing that's not the case for real data.
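For example, with the sample response above, the expected values could be kept as plain lists (the variable names expectedNames, expectedStatus and expectedDescriptions are just illustrative):
def expectedNames = ['Microservice one', 'Microservice two', 'Microservice three', 'Microservice four', 'Microservice five']
def expectedStatus = ['Healthy'] * 5
def expectedDescriptions = ['MSOneURI', 'MSTwoURI', 'MSThreeURI', 'MSFourURI', 'MSFiveURI'].collect { "Url check ${it} success : status(OK)".toString() }
Comparing against real lists like this inside the loop also avoids the toString() round-trip used in the original script.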