Getting separate street number and street name from the Foursquare search API?

I'm using Foursquare's venue search API and everything is working as expected, but the response does not contain separate fields for the street number and the street name. It contains one field named "address" that holds both, like the example below:
"location": {
"address": "180 Orchard St",
"crossStreet": "btwn Houston & Stanton St",
"lat": 40.72173744277209,
"lng": -73.98800687282996,
"labeledLatLngs": [
{
"label": "display",
"lat": 40.72173744277209,
"lng": -73.98800687282996
}
],
"distance": 8,
"postalCode": "10002",
"cc": "US",
"city": "New York",
"state": "NY",
"country": "United States",
"formattedAddress": [
"180 Orchard St (btwn Houston & Stanton St)",
"New York, NY 10002",
"United States"
]
}
I need to have the street number and street name separately. Is there any way to get a response that has them separated, or is the only way for me to parse it myself?
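The location object above only exposes the combined address string, so if separate fields are needed the practical route is to split it client-side. Below is a minimal Node.js sketch; the splitAddress helper is hypothetical and assumes the address starts with a numeric street number, which is not guaranteed for every venue:

// Hypothetical helper: splits "180 Orchard St" into { number: "180", street: "Orchard St" }.
// Assumes the address begins with digits; otherwise the whole string is returned as the street.
function splitAddress(address) {
  var match = /^(\d+[\w\-\/]*)\s+(.+)$/.exec(address || '');
  if (!match) {
    return { number: null, street: address || null };
  }
  return { number: match[1], street: match[2] };
}

var parts = splitAddress(venue.location.address); // venue = one result from the search response
console.log(parts); // { number: '180', street: 'Orchard St' }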

Related

How to replace existing key in jsonb?

I'm trying to update a jsonb array in Postgres by replacing the entire array. It's important to note that I'm not trying to add an array to the object, but simply to replace the whole thing with new values. When I try the code below, I get this error in the console:
error: cannot replace existing key
I'm using Node.js as the server-side language.
server.js
//new array with new values
var address = {
"appt": appt,
"city": city,
"street": street,
"country": country,
"timezone": timezone,
"coordinates": coordinates,
"door_number": door_number,
"state_province": state_province,
"zip_postal_code": zip_postal_code
}
//query
var text = "UPDATE users SET info = JSONB_insert(info, '{address}', '" + JSON.stringify(address) + "') WHERE id=$1 RETURNING*";
var values = [userid];
//pool...[below]
users table
id (serial) | info (jsonb)
And this is the object I need to update:
{
"dob": "1988-12-29",
"type": "seller",
"email": "eyetrinity3#test.com",
"phone": "5553766962",
"avatar": "f",
"address": [
{
"appt": "",
"city": "Brandon",
"street": "11th Street East",
"country": "Canada",
"timezone": "Eastern Standard Time",
"coordinates": [
"-99.925011",
"49.840649"
],
"door_number": "666",
"state_province": "Manitoba",
"zip_postal_code": "R7A 7B8"
}
],
"last_name": "doe",
"first_name": "john",
"date_created": "2022-11-12T19:44:36.714Z",
}
The below works in db-fiddle on PostgreSQL v15 (it did not work in v12).
specific element
update json_update_t set info['address'][0] = '{
"appt": "12",
"city": "crater",
"street": "11th Street East",
"country": "mars",
"timezone": "Eastern Standard Time",
"coordinates": [
"-99.925011",
"49.840649"
],
"door_number": "9999",
"state_province": "marsbar",
"zip_postal_code": "abc 123"
}';
whole array
update json_update_t set info['address'] = '[{
"appt": "14",
"city": "crater",
"street": "11th Street East",
"country": "mars",
"timezone": "Eastern Standard Time",
"coordinates": [
"-99.925011",
"49.840649"
],
"door_number": "9999",
"state_province": "marsbar",
"zip_postal_code": "abc 123"
}]';
I have found the answer for this. Going through some of my older apps, I stumbled upon it: it's not JSONB_INSERT but JSONB_SET. Notice the difference: the latter replaces the entire key instead of inserting into or adding to the object.
JSONB_INSERT --> insert
UPDATE users SET info = JSONB_insert(info, '{address,-1}', '" + JSON.stringify(address) + "',true) WHERE id=$1 RETURNING*
JSONB_SET --> set and replace
UPDATE users SET info = JSONB_SET(info, '{address}', '" + JSON.stringify(address) +"') WHERE id=$1 RETURNING*
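As a side note, building the statement by concatenating JSON.stringify output is fragile (quoting issues, SQL injection). A minimal sketch of the same JSONB_SET call parameterized with node-postgres, assuming the pool and users table shown above; the new value is wrapped in an array so info.address stays an array:

// Hypothetical parameterized version of the JSONB_SET query (node-postgres)
var text = "UPDATE users SET info = JSONB_SET(info, '{address}', $2::jsonb) WHERE id = $1 RETURNING *";
var values = [userid, JSON.stringify([address])]; // $2 arrives as text and is cast to jsonb
pool.query(text, values)
  .then(function (res) { console.log(res.rows[0].info.address); })
  .catch(function (err) { console.error(err); });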

Spark - Permissive mode with JSON file moves all records to corrupt column

I am trying to ingest a JSON file using Spark. I am applying the schema manually to create a DataFrame. The problem is that even if only a single record does not match the schema, the whole file (all the records) is moved to the corrupt column.
Data
[{
"RecordNumber": 2,
"Zipcode": 704,
"ZipCodeType": "STANDARD",
"City": "PASEO COSTA DEL SUR",
"State": "PR"
},
{
"RecordNumber": 10,
"Zipcode": 709,
"ZipCodeType": "STANDARD",
"City": "BDA SAN LUIS",
"State": "PR"
},{
"Zipcode": "709aa",
"ZipCodeType": "STANDARD",
"City": "BDA SAN LUIS",
"State": "PR"
}]
Code
import org.apache.spark.sql.types._
import org.apache.spark.sql.types.DataTypes._
val s = StructType(StructField("City", StringType, true) ::
  StructField("RecordNumber", LongType, true) ::
  StructField("State", StringType, true) ::
  StructField("ZipCodeType", StringType, true) ::
  StructField("Zipcode", LongType, true) ::
  StructField("corrupted_record", StringType, true) ::
  Nil)
val df2 = spark.read.
  option("multiline", "true").
  option("mode", "PERMISSIVE").
  option("columnNameOfCorruptRecord", "corrupted_record").
  schema(s).
  json("/tmp/test.json")
df2.show(false)
Output
scala> df2.filter($"corrupted_record".isNotNull).show(false)
+----+------------+-----+-----------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|City|RecordNumber|State|ZipCodeType|Zipcode|corrupted_record |
+----+------------+-----+-----------+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|null|null |null |null |null |[{
"RecordNumber": 2,
"Zipcode": 704,
"ZipCodeType": "STANDARD",
"City": "PASEO COSTA DEL SUR",
"State": "PR"
},
{
"RecordNumber": 10,
"Zipcode": 709,
"ZipCodeType": "STANDARD",
"City": "BDA SAN LUIS",
"State": "PR"
},{
"Zipcode": "709aa",
"ZipCodeType": "STANDARD",
"City": "BDA SAN LUIS",
"State": "PR"
}]
Question
Since only the third record has Zipcode as a string while I expect it to be an integer ("Zipcode": "709aa"), shouldn't only the third record go to the corrupted_record column and the others be parsed correctly?
You have only one record (because of multiline = true), and it is corrupt, so everything goes there.
As the documentation says, if you want Spark to treat the records separately you need to use the JSON Lines format, which will also scale better for bigger files because Spark can distribute the parsing across multiple executors.
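For reference, a sketch of the same read against a JSON Lines version of the file (one object per line, no surrounding array brackets); the /tmp/test.jsonl path is an assumption. With multiline left at its default of false, only the row with the bad Zipcode should end up in corrupted_record:

// /tmp/test.jsonl -- one JSON object per line, for example:
// {"RecordNumber": 2, "Zipcode": 704, "ZipCodeType": "STANDARD", "City": "PASEO COSTA DEL SUR", "State": "PR"}
// {"RecordNumber": 10, "Zipcode": 709, "ZipCodeType": "STANDARD", "City": "BDA SAN LUIS", "State": "PR"}
// {"Zipcode": "709aa", "ZipCodeType": "STANDARD", "City": "BDA SAN LUIS", "State": "PR"}
val df3 = spark.read.
  option("mode", "PERMISSIVE").
  option("columnNameOfCorruptRecord", "corrupted_record").
  schema(s).
  json("/tmp/test.jsonl") // multiline left at its default (false); schema s reused from above
// Only the third line fails the Long cast for Zipcode, so only that row is flagged.
df3.filter($"corrupted_record".isNotNull).show(false)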

Aggregate on inner object array

I am trying to write a query that performs an aggregate on one of its properties, which is an array of objects. As an example, for the JSON structure below I want the country and the biggest airport as two columns in the output:
[
{
"Country": "US",
"Airports": [
{
"Name": "Kodiak Airport",
"Area": "100"
},
{
"Name": "Homer Airport",
"Area": "87"
}
]
},
{
"Country": "Mexico",
"Airports": [
{
"Name": "Gulfport-Biloxi International Airport",
"Area": "94"
},
{
"Name": "El Paso International Airport",
"Area": "68"
}
]
}
]
so the result will be two columns, the country name and the biggest airport's name, as below:
Country | Airport
US | Kodiak Airport
Mexico | Gulfport-Biloxi International Airport
The following query returns the country and the first airport's name from the airports_s array.
MyLogs_CL
| project country_s, Airports = todynamic(airports_s)
| project country_s, Airports[0].name
But I don't know how to perform an aggregate on that array and return the object with the highest area.
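One way (a sketch, untested against your workspace, and assuming the property casing in airports_s matches the sample, i.e. Name and Area) is to expand the array and keep the airport with the largest Area per country using arg_max; tolong() is needed because Area is stored as a string:

MyLogs_CL
| project Country = country_s, Airports = todynamic(airports_s)
| mv-expand Airport = Airports                        // one row per airport object
| summarize arg_max(tolong(Airport.Area), Airport) by Country
| project Country, Airport = tostring(Airport.Name)   // keep only the winning airport's name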

How would I call the data in this object?

I am new to web development and am about halfway through a full-stack web development course. How would I go about getting the value of the rating whose source is "Rotten Tomatoes"?
I have tried Ratings[1].Value and it does not seem to work.
var movieObject = JSON.parse(body);
console.log('Rotten Tomatoes Rating: ', movieObject.Ratings[1].Value);
Body Content:
{
"Title": "Avatar",
"Year": "2009",
"Rated": "PG-13",
"Released": "18 Dec 2009",
"Runtime": "162 min",
"Genre": "Action, Adventure, Fantasy",
"Director": "James Cameron",
"Writer": "James Cameron",
"Actors": "Sam Worthington, Zoe Saldana, Sigourney Weaver, Stephen Lang",
"Plot": "A paraplegic marine dispatched to the moon Pandora on a unique mission becomes torn between following his orders and protecting the world he feels is his home.",
"Language": "English, Spanish",
"Country": "UK, USA",
"Awards": "Won 3 Oscars. Another 85 wins & 128 nominations.",
"Poster": "https://images-na.ssl-images-amazon.com/images/M/MV5BMTYwOTEwNjAzMl5BMl5BanBnXkFtZTcwODc5MTUwMw##._V1_SX300.jpg",
"Ratings": [
{
"Source": "Internet Movie Database",
"Value": "7.8/10"
},
{
"Source": "Rotten Tomatoes",
"Value": "83%"
},
{
"Source": "Metacritic",
"Value": "83/100"
}
],
"Metascore": "83",
"imdbRating": "7.8",
"imdbVotes": "967,488",
"imdbID": "tt0499549",
"Type": "movie",
"DVD": "22 Apr 2010",
"BoxOffice": "$749,700,000",
"Production": "20th Century Fox",
"Website": "http://www.avatarmovie.com/",
"Response": "True"
}
Your call is correct, so it must be something in the setup. My best guess is that you're calling JSON.parse() on a body that has already been parsed into an object.
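If that is the case, here is a minimal defensive sketch (the body variable comes from the question; the lookup by Source is just a suggestion so the code does not depend on the array order):

// Only parse when body is actually a string; some HTTP clients already return a parsed object.
var movieObject = typeof body === 'string' ? JSON.parse(body) : body;
// Look the rating up by Source instead of relying on it being at index 1.
var rotten = movieObject.Ratings.find(function (r) {
  return r.Source === 'Rotten Tomatoes';
});
console.log('Rotten Tomatoes Rating:', rotten ? rotten.Value : 'not available');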

Update inner object in arangodb

I have an object stored in ArangoDB which has nested inner objects; my current use case requires that I update just one of the elements.
Store Object
{
"status": "Active",
"physicalCode": "99999",
"postalCode": "999999",
"tradingCurrency": "USD",
"taxRate": "14",
"priceVatInclusive": "No",
"type": "eCommerce",
"name": "John and Sons inc",
"description": "John and Sons inc",
"createdDate": "2015-05-25T11:04:14+0200",
"modifiedDate": "2015-05-25T11:04:14+0200",
"physicalAddress": "Corner moon and space 9 station",
"postalAddress": "PO Box 44757553",
"physicalCountry": "Mars Sector 9",
"postalCountry": "Mars Sector 9",
"createdBy": "john.doe",
"modifiedBy": "john.doe",
"users": [
{
"id": "577458630580",
"username": "john.doe"
}
],
"products": [
{
"sellingPrice": "95.00",
"inStock": "10",
"name": "School Shirt Green",
"code": "SKITO2939999995",
"warehouseId": "723468998682"
},
{
"sellingPrice": "95.00",
"inStock": "5",
"name": "School Shirt Red",
"code": "SKITO245454949495",
"warehouseId": "723468998682"
},
{
"sellingPrice": "95.00",
"inStock": "10",
"discount": "5%",
"name": "School Shirt Blue",
"code": "SKITO293949495",
"warehouseId": "723468998682"
}
]
}
I want to change just one product's stock value:
{
"sellingPrice": "95.00",
"inStock": "10",
"discount": "5%",
"name": "School Shirt Blue",
"code": "SKITO293949495",
"warehouseId": "723468998682"
}
Something like "update store product stock less 1 where store id = x", to this effect:
FOR store IN stores
  FILTER store._key == "837108415472"
  FOR product IN store.products
    FILTER product.code == "SKITO293949495"
    UPDATE product WITH { inStock: (product.inStock - 1) } IN store.products
Apart from the above, it possibly makes sense to store each product as a separate document in a store_products collection. I believe that in NoSQL this is the best approach to reduce document size.
Found the answer here: arangodb-aql-update-single-object-in-embedded-array and there: arangodb-aql-update-for-internal-field-of-object.
I however believe it is best to maintain separate documents and use joins when retrieving; updates are then easy.
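For completeness, the pattern those linked answers use is to rebuild the embedded array and write it back with a single UPDATE on the store document; a sketch using the key and product code from the question (inStock is stored as a string, hence the conversions):

FOR store IN stores
  FILTER store._key == "837108415472"
  // rebuild products, decrementing stock only for the matching code
  LET newProducts = (
    FOR product IN store.products
      RETURN product.code == "SKITO293949495"
        ? MERGE(product, { inStock: TO_STRING(TO_NUMBER(product.inStock) - 1) })
        : product
  )
  UPDATE store WITH { products: newProducts } IN stores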
