Copy website from Chrome to Excel using VBA

I need to copy an entire web page (in Chrome) and put it in a specific place in Excel using VBA. I'm trying to create a distance matrix, so I work with files that look like this example:
{
  "destination_addresses" : [
    "Bratislava, Slovak republic"
  ],
  "origin_addresses" : [
    "Vienna, Austria"
  ],
  "rows" : [
    {
      "elements" : [
        {
          "distance" : {
            "text" : "79,1 km",
            "value" : 79100
          },
          "duration" : {
            "text" : "0 hours, 54 minutes",
            "value" : 3240
          },
          "status" : "OK"
        }
      ]
    }
  ],
  "status" : "OK"
}
After copying the web content I can extract just the distance and duration and insert them into a table.
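No answer is included in this snippet, but here is a minimal sketch of one common alternative: instead of copying the rendered page out of Chrome, VBA can request the Distance Matrix JSON directly over HTTP and drop it into a cell for parsing. The URL parameters, sheet name, and API key below are placeholders, not values from the original question.

' Minimal sketch (assumed URL, sheet name, and API key): fetch the
' Distance Matrix JSON over HTTP instead of copying it from Chrome,
' then write the raw response to a cell for further parsing.
Sub FetchDistanceMatrix()
    Dim http As Object
    Dim url As String
    Dim json As String

    url = "https://maps.googleapis.com/maps/api/distancematrix/json" & _
          "?origins=Vienna,Austria&destinations=Bratislava,Slovak+Republic" & _
          "&key=YOUR_API_KEY"  ' placeholder key

    Set http = CreateObject("MSXML2.XMLHTTP")
    http.Open "GET", url, False
    http.Send
    json = http.responseText

    ' Distance and duration can then be extracted from this cell with
    ' InStr/Mid string parsing or a JSON parser.
    ThisWorkbook.Worksheets("Sheet1").Range("A1").Value = json
End Sub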

Related

Convert 3D geometry to glTF

** Asking through a translator.
I want to express 3D geometry in glTF (to use it in CesiumJS).
First, I converted the coordinates to ECEF coordinates.
Next, I created a glTF file.
However, it does not look right on the preview screen (VS Code extension), and jittering occurs when moving.
I want to know the cause.
** Coordinates (EPSG:4326)
-73.561001356474, 45.4966833629139, 19.9
-73.5610780383518, 45.4967314096038, 19.9
-73.5610843141094, 45.4967252603133, 19.9
-73.561001356474, 45.4966833629139, 19.9
** Preview glTF (VS Code extension): screenshot not reproduced here.
** glTF source
{
  "scenes" : [
    {
      "nodes" : [ 0 ]
    }
  ],
  "nodes" : [
    {
      "mesh" : 0
    }
  ],
  "meshes" : [
    {
      "name" : "gltf test",
      "primitives" : [
        {
          "attributes" : {
            "POSITION" : 1
          },
          "indices" : 0,
          "mode" : 4
        }
      ]
    }
  ],
  "buffers" : [
    {
      "uri" : "data:application/octet-stream;base64,AAABAAIAAAAYtZpJTBWDyiIhikritJpJSBWDyiohikrftJpJSRWDyikhiko=",
      "byteLength" : 44
    }
  ],
  "bufferViews" : [
    {
      "buffer" : 0,
      "byteOffset" : 0,
      "byteLength" : 6,
      "target" : 34963
    },
    {
      "buffer" : 0,
      "byteOffset" : 8,
      "byteLength" : 36,
      "byteStride" : 12,
      "target" : 34962
    }
  ],
  "accessors" : [
    {
      "bufferView" : 0,
      "byteOffset" : 0,
      "componentType" : 5123,
      "count" : 3,
      "type" : "SCALAR"
    },
    {
      "bufferView" : 1,
      "byteOffset" : 0,
      "componentType" : 5126,
      "count" : 3,
      "type" : "VEC3",
      "min" : [ 1267355.8953368044, -4295333.93970134, 4526225.02467026 ],
      "max" : [ 1267363.054331189, -4295331.983019848, 4526228.767742585 ]
    }
  ],
  "asset" : {
    "version" : "2.0"
  }
}
You haven't shared how you created this file, so we can't really tell you the cause of the issue. I would recommend adding enough information to fully reproduce the problem. Importantly, the file you've shared has only three vertices; try inspecting it in VS Code's glTF addon.

How to search for a value in an array of objects inside an array

I have a dataset like this:
{
  "_id" : ObjectId("5ede1b6c317aca326c2f18d7"),
  "createdate" : ISODate("2020-06-11T18:30:00.000Z"),
  "userHolder" : [
    {
      "time" : "12:00",
      "user" : [
        "5ede1ff42b3e633edc0ba10e"
      ]
    },
    {
      "time" : "16:30",
      "user" : []
    }
  ]
},
{
  "_id" : ObjectId("5ede1b6c317aca326c2f18d8"),
  "createdate" : ISODate("2020-06-12T18:30:00.000Z"),
  "userHolder" : [
    {
      "time" : "12:30",
      "user" : [
        "5ede1ff42b3e633edc0ba10f"
      ]
    },
    {
      "time" : "13:00",
      "user" : [
        "5ede1ff42b3e633edc0ba10e"
      ]
    },
    {
      "time" : "12:00",
      "user" : [
        "5ede1ff42b3e633edc0ba10f"
      ]
    },
    {
      "time" : "16:30",
      "user" : []
    }
  ]
}
I split the day into half-hour entries, i.e. a full day is 48 possible slots in the userHolder array: 12:00, 12:30, 13:00, and so on. If no user has an entry for a slot, that slot is not created.
So if I want to search for the id 5ede1ff42b3e633edc0ba10e across the complete collection, how do I write the query?
I tried to use the $all operator, but it does not work on this nested structure.
There is $elemMatch, but that query would be too large, since I would have to write 48 conditions, one per timestamp. The expected result is that the query returns the _id of every matching entry, making it clear that the id exists in n entries. I want the data, not a count.
Any help is really appreciated.
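No answer is included in this snippet, but as a minimal sketch of one approach (the collection name entries is an assumption): MongoDB dot notation descends through nested arrays, so a single condition can match the id in any time slot without writing 48 $elemMatch conditions.

// Sketch (assumed collection name "entries"): "userHolder.user" traverses
// both the userHolder array and each nested user array, so this matches
// the id at any time slot. Project only _id, as the question asks for ids.
db.entries.find(
  { "userHolder.user" : "5ede1ff42b3e633edc0ba10e" },
  { "_id" : 1 }
)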

All fields search [duplicate]

This question already has answers here:
MongoDB Query Help - query on values of any key in a sub-object
(3 answers)
Closed 6 years ago.
This is my dataset, which is part of a bigger JSON document. I want to write a query that will match all fields inside value_chain.
Dataset:
"value_chain" : {
"category" : "Source, Make & Deliver",
"hpe_level0" : "gift Chain Planning",
"hpe_level1" : "nodemand to Plan",
"hpe_level2" : "nodemand Planning",
"hpe_level3" : "nodemand Sensing"
},
Example:
If someone searches for "gift", the query should scan through all fields, and if there is a match, return the document.
This is something I tried, but it didn't work:
db.sw_api.find({
value_chain: { $elemMatch: { "Source, Make & Deliver" } }
})
Sounds like you need to create a $text index on all the text fields first, since $text performs its search on the content of the fields indexed with a text index:
db.sw_api.createIndex({
  "value_chain.category" : "text",
  "value_chain.hpe_level0" : "text",
  "value_chain.hpe_level1" : "text",
  "value_chain.hpe_level2" : "text",
  "value_chain.hpe_level3" : "text"
}, { "name" : "value_chain_text_idx" });
The index you create is a compound index consisting of 5 fields, and MongoDB will autogenerate a name for it by default if you don't override it. With the above, if you don't specify the index name, as in:
db.sw_api.createIndex({
  "value_chain.category" : "text",
  "value_chain.hpe_level0" : "text",
  "value_chain.hpe_level1" : "text",
  "value_chain.hpe_level2" : "text",
  "value_chain.hpe_level3" : "text"
});
there is a potential error, "ns name is too long (127 byte max)", since the autogenerated index namespace will look like this:
"you_db_name.sw_api.$value_chain.category_text_value_chain.hpe_level0_text_value_chain.hpe_level1_text_value_chain.hpe_level2_text_value_chain.hpe_level3_text"
Hence the need to give the index an explicit name that is not too long, rather than letting MongoDB autogenerate one.
Once the index is created, a db.sw_api.getIndexes() query will show you the indexes present:
/* 1 */
[
  {
    "v" : 1,
    "key" : {
      "_id" : 1
    },
    "name" : "_id_",
    "ns" : "dbname.sw_api"
  },
  {
    "v" : 1,
    "key" : {
      "_fts" : "text",
      "_ftsx" : 1
    },
    "name" : "value_chain_text_idx",
    "ns" : "dbname.sw_api",
    "weights" : {
      "value_chain.category" : 1,
      "value_chain.hpe_level0" : 1,
      "value_chain.hpe_level1" : 1,
      "value_chain.hpe_level2" : 1,
      "value_chain.hpe_level3" : 1
    },
    "default_language" : "english",
    "language_override" : "language",
    "textIndexVersion" : 3
  }
]
Once you create the index, you can then do a $text search:
db.sw_api.find({ "$text": { "$search": "gift" } })

Editable document fields in Elasticsearch

I have documents that contain an object whose attributes are editable (add/delete/edit) at runtime.
{
  "testIndex" : {
    "mappings" : {
      "documentTest" : {
        "properties" : {
          "typeTestId" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "createdDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "designation" : {
            "type" : "string",
            "fields" : {
              "raw" : {
                "type" : "string",
                "index" : "not_analyzed"
              }
            }
          },
          "id" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "modifiedDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "stuff" : {
            "type" : "string"
          },
          "suggest" : {
            "type" : "completion",
            "analyzer" : "simple",
            "payloads" : true,
            "preserve_separators" : true,
            "preserve_position_increments" : true,
            "max_input_length" : 50,
            "context" : {
              "typeTestId" : {
                "type" : "category",
                "path" : "typeTestId",
                "default" : [ ]
              }
            }
          },
          "values" : {
            "properties" : {
              "Att1" : {
                "type" : "string"
              },
              "att2" : {
                "type" : "string"
              },
              "att400" : {
                "type" : "date",
                "format" : "dateOptionalTime"
              }
            }
          }
        }
      }
    }
  }
}
The values field is an object that can be edited through typeTest, so if I change something in typeTest it should be reflected here. If I create a new field there's no problem, but it should also be possible to edit or delete existing fields in typeTest. For example, if I delete values.att1, all documentTest documents should lose it, and the mapping should be updated as well.
From what I saw, we cannot do this without reindexing. So for now my solution is to remove the fields in Elasticsearch, just as mentioned in this question, and have a worker do the reindexing from time to time if needed.
This does not seem like a real "solution" to me. Is there a better way to have documents of this type in Elasticsearch, with this flexibility, without having to reindex from time to time?
You can use the Update API to delete, add or modify a field.
The issue is that documents are immutable in Elasticsearch, so when you make changes with the Update API, they are executed by marking the old document as deleted and adding a new one with the updates applied.
The deletion and the creation of the new document are transparent to you, so you do not have to reindex or do anything else. The downside is that if you plan to modify a very large number of documents (like an update query that modifies 5 million documents), it will be very I/O intensive for the nodes.
BTW, this also applies to deletions.
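For illustration, a minimal sketch of such an update (the index and type names come from the mapping above; the document id 1 and enabled inline scripting are assumptions, and the exact syntax depends on your Elasticsearch version):

# Sketch: remove the "att1" key from the "values" object of document 1
# (assumed id) via the Update API with an inline script.
curl -XPOST "http://localhost:9200/testIndex/documentTest/1/_update" -d '
{
  "script" : "ctx._source.values.remove(\"att1\")"
}'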

MongoDB remove the lowest score, node.js

I am trying to remove the lowest homework score.
I tried this,
var a = db.students.find({"scores.type":"homework"}, {"scores.$":1}).sort({"scores.score":1})
but how can I remove that piece of data?
I have 200 similar documents, like the one below.
{
  "_id" : 148,
  "name" : "Carli Belvins",
  "scores" : [
    {
      "type" : "exam",
      "score" : 84.4361816750119
    },
    {
      "type" : "quiz",
      "score" : 1.702113040528119
    },
    {
      "type" : "homework",
      "score" : 22.47397850465176
    },
    {
      "type" : "homework",
      "score" : 88.48032660881387
    }
  ]
}
You are trying to remove an element, but the statement you provided only finds it.
Use db.students.remove(<query>) instead. Full documentation here.
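One caveat worth noting (not part of the original answer): db.students.remove() deletes whole documents, while the goal here is to drop a single element from the scores array. A minimal sketch using update with $pull instead, looping over the cursor in the mongo shell:

// Sketch: for each student with homework scores, find their lowest
// homework score and $pull the element(s) with that type and score
// from the scores array.
db.students.find({ "scores.type" : "homework" }).forEach(function (doc) {
  var lowest = null;
  doc.scores.forEach(function (s) {
    if (s.type === "homework" && (lowest === null || s.score < lowest)) {
      lowest = s.score;
    }
  });
  db.students.update(
    { _id : doc._id },
    { $pull : { scores : { type : "homework", score : lowest } } }
  );
});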
