How to block users on Realtime Database in a community app (Android) - android-studio

I want to make an app where, when user A blocks user B, user A immediately can't see user B's posts or comments.
My data structure doesn't have a Users node, because I'm keeping the app simple.
"Comment" : {
"-MptDdCq-j5JAuqgHkGt" : {
"-Mq8JFEZ5gdJhYIpQ4Vm" : {
"content" : "1",
"timestamp" : 1638686327979,
"uid" : "",
"uimg" : "",
"uname" : ""
}
}
"Posts" : {
"-Mqsc-nqUAOWNb9_kSvl" : {
"description" : "",
"picture" : "",
"postKey" : "-Mqsc-nqUAOWNb9_kSvl",
"timeStamp" : 1639480036785,
"title" : "",
"userId" : "",
"userPhoto" : ""
},
"-MqshZSrx0aO8OPMGC2M" : {
"description" : "",
"picture" : "",
"postKey" : "-MqshZSrx0aO8OPMGC2M",
"timeStamp" : 1639481493534,
"title" : "",
"userId" : "",
"userPhoto" : ""
}
Here is my structure. Can I build a blocking function without a Users node?
I learned a solution from "How to block users on Firebase in a social media app?" for iOS, but that solution needs a Users node. Is there another way?
I have since added a BlockUser node. Here is the new structure:
"BlockUser" : {
"k1kn0JF5idhrMzuw46GarEIBgPw2" : "OMBueDmbXdQhePnVaVH2teyOGzl2",
"kVAREcjmrHgLlvOldJetBCoiLx93" : "kVAREXdQhePnVaVH2JetBCoiLx93"}
The key is the ID of the user who blocks, and the value is the ID of the user who has been blocked.
Can I build the block-user function this way, using Firebase rules?
My Firebase rules:
{
  "rules": {
    ".read": "auth.uid != null",
    ".write": "auth.uid != null"
  }
}

Related

How to detect access denied/unauthorized activity logs in Azure?

My objective is to detect actions performed by users that resulted in an access denied or unauthorized error using activity logs.
To detect errors I use the "resultType" field: when it is "Failure", I know the record is an error. I want to go one step further and filter for the records that are "access denied" or "unauthorized" errors.
So far I have considered the following fields as potential candidates, but haven't found any relevant information in them:
resultDescription
properties.statusCode
Following is a sample schema of the activity log we get on our end. The schema looks like this because we stream our activity log to a storage account (https://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log-schema#schema-from-storage-account-and-event-hubs):
When streaming the Azure Activity log to a storage account or event hub, the data follows the resource log schema.
{
  "callerIpAddress" : "0.0.0.0",
  "resourceGroup" : "group",
  "resourceId" : "dummy",
  "level" : "Information",
  "production" : false,
  "operationName" : "MICROSOFT.WEB/DUMMY",
  "ingestTime" : "time",
  "resultSignature" : "Succeeded.OK",
  "accountId" : "dummyId",
  "identity" : {
    "authorization" : {
      "evidence" : {
        "roleAssignmentScope" : "group",
        "role" : "dummy",
        "roleDefinitionId" : "dummy",
        "roleAssignmentId" : "dummy",
        "principalId" : "dummy",
        "principalType" : "dummy"
      },
      "scope" : "dummy",
      "action" : "dummy"
    },
    "claims" : {
      "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" : "dummy",
      "appid" : "dummy",
      "http://schemas.microsoft.com/identity/claims/objectidentifier" : "dummy"
    }
  },
  "customerID" : "dummy",
  "correlationId" : "dummy",
  "time" : "dummy",
  "category" : "dummy",
  "resultType" : "Failure",
  "resultDescription": "dummy",
  "durationMs" : "dummy",
  "properties" : {
    "eventCategory" : "Administrative",
    "statusCode" : "OK"
  }
}
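Since the log is already being streamed as JSON, one option is to post-filter the records after the resultType check. This is only a sketch: the authorization-related values checked below ("Unauthorized", "Forbidden", "AuthorizationFailed") are assumptions, so verify them against real failed records from your storage account; the sample above only shows a success signature ("Succeeded.OK").

// Sketch: flag streamed activity-log records that look like authorization failures.
// The DENIED values are assumptions -- confirm against real failed records.
const DENIED = ["Unauthorized", "Forbidden", "AuthorizationFailed"];

function isAccessDenied(record) {
  if (record.resultType !== "Failure") return false;
  const signature = record.resultSignature || "";                  // e.g. "Failed.Unauthorized"
  const status = (record.properties && record.properties.statusCode) || "";
  return DENIED.some(v => signature.includes(v) || status === v);
}

// Usage: const denied = records.filter(isAccessDenied);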

$merge, $match and $update in one aggregate query

I have data in a collection, e.g. "jobs". I am trying to copy specific data from "jobs" every 2 hours to a new collection (which may not exist initially) and also add a new key to the copied data.
I have been trying with this query to copy the data:
db.getCollection("jobs").aggregate([{ $match: { "job_name": "UploadFile", "created_datetime" : {"$gte":"2021-08-18 12:00:00"} } },{"$merge":{into: {coll : "reports"}}}])
But after this, the count in the "reports" collection is 0. Also, how can I update the copied documents (adding an extra key "report_name") without using an extra updateMany() query?
The data in the jobs collection is as shown:
{
  "_id" : ObjectId("60fa8e8283dc22799134dc6f"),
  "job_id" : "408a5654-9a89-4c15-82b4-b0dc894b19d7",
  "job_name" : "UploadFile",
  "data" : {
    "path" : "share://LOCALNAS/Screenshot from 2021-07-23 10-34-34.png",
    "file_name" : "Screenshot from 2021-07-23 10-34-34.png",
    "parent_path" : "share://LOCALNAS",
    "size" : 97710,
    "md5sum" : "",
    "file_uid" : "c4411f10-a745-48d0-a55d-164707b7d6c2",
    "version_id" : "c3dfd31a-80ba-4de0-9115-2d9b778bcf02",
    "session_id" : "c4411f10-a745-48d0-a55d-164707b7d6c2",
    "resource_name" : "Screenshot from 2021-07-23 10-34-34.png",
    "metadata" : {
      "metadata" : {
        "description" : "",
        "tag_ids" : [ ]
      },
      "category_id" : "60eed9ea33c690a0dfc89b41",
      "custom_metadata" : [ ]
    },
    "upload_token" : "upload_token_c5043927484e",
    "upload_url" : "/mnt/share_LOCALNAS",
    "vfs_action_handler_id" : "91be4282a9ad5067642cdadb75278230",
    "element_type" : "file"
  },
  "user_id" : "60f6c507d4ba6ee28aee5723",
  "node_id" : "syeda",
  "state" : "COMPLETED",
  "priority" : 2,
  "resource_name" : "Screenshot from 2021-07-23 10-34-34.png",
  "group_id" : "upload_group_0babf8b7ce0b",
  "status_info" : {
    "progress" : 100,
    "status_msg" : "Upload Completed."
  },
  "error_code" : "",
  "error_message" : "",
  "created_datetime" : ISODate("2021-07-23T15:10:18.506Z"),
  "modified_datetime" : ISODate("2021-07-23T15:10:18.506Z"),
  "schema_version" : "1.0.0"
}
Your $match stage contains a condition that compares created_datetime with a string, while in your sample data it is an ISODate. Such a condition won't return any documents; try:
{
  $match: {
    "job_name": "UploadFile",
    "created_datetime": {
      "$gte": ISODate("2021-07-01T12:00:00.000Z")
    }
  }
}
Mongo Playground
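To also add the extra "report_name" key during the copy, so no separate updateMany() is needed, a $set stage (MongoDB 4.2+, like $merge) can be inserted before $merge. A sketch; the value "upload_report" is a placeholder for whatever the report name should be:

db.getCollection("jobs").aggregate([
  { $match: {
      "job_name": "UploadFile",
      "created_datetime": { "$gte": ISODate("2021-07-01T12:00:00.000Z") }
  } },
  // add the new key to every copied document
  { $set: { "report_name": "upload_report" } },
  // a plain string for "into" targets a collection in the same database;
  // it is created automatically if it does not exist
  { $merge: { into: "reports", whenMatched: "replace", whenNotMatched: "insert" } }
])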

Query an array in MongoDB

I have this collection in MongoDB:
{
  "_id" : ObjectId("5df013b10a88910018267a89"),
  "StockNo" : "33598",
  "Description" : "some description",
  "detections" : [
    {
      "lastDetectedOn" : ISODate("2020-01-29T04:36:41.191+0000"),
      "lastDetectedBy" : "comp-t",
      "_id" : ObjectId("5e3135f68c9e930017de8aec")
    },
    {
      "lastDetectedOn" : ISODate("2019-12-21T18:12:06.571+0000"),
      "lastDetectedBy" : "comp-n",
      "_id" : ObjectId("5e3135f68c9e930017de8ae9")
    },
    {
      "lastDetectedOn" : ISODate("2020-01-29T07:36:06.910+0000"),
      "lastDetectedBy" : "comp-a",
      "_id" : ObjectId("5e3135f68c9e930017de8ae7")
    }
  ],
  "createdAt" : ISODate("2019-12-10T21:52:49.788+0000"),
  "updatedAt" : ISODate("2020-01-29T07:36:22.950+0000"),
  "__v" : NumberInt(0)
}
I want to search by StockNo and get the name of the computer that last detected it (lastDetectedBy), but only if lastDetectedOn is within the last 5 minutes, using Mongoose in Node.js with Express.
I also have this collection:
{
  "_id" : ObjectId("5df113b10d35670018267a89"),
  "InvoiceNo" : "1",
  "InvoiceDate" : ISODate("2020-01-14T02:18:11.196+0000"),
  "InvoiceContact" : "",
  "isActive" : true
},
{
  "_id" : ObjectId("5df013c90a88910018267a8a"),
  "InvoiceNo" : "2",
  "InvoiceDate" : ISODate("2020-01-14T02:18:44.279+0000"),
  "InvoiceContact" : "Bob Smith",
  "isActive" : true
},
{
  "_id" : ObjectId("5e3096bb8c9e930017dc6e20"),
  "InvoiceNo" : "3",
  "InvoiceDate" : ISODate("2020-01-14T02:19:50.155+0000"),
  "InvoiceContact" : "",
  "isActive" : true
}
I also want to update all documents that have an empty InvoiceContact, were issued in the last 30 seconds (or any date range between now and some time in the past), and have isActive equal to true, setting isActive to false. For example, the first record was issued in the last 30 seconds, has no InvoiceContact, and isActive is true, so it must be updated; the other two records remain untouched for different reasons: the second has an InvoiceContact and the third is out of range.
First Part
// 5 minutes ago (in the mongo shell, ISODate() is "now")
var mins5 = new Date(ISODate() - 1000 * 60 * 5);
db.getCollection('user').find({
  $and: [
    { "StockNo": "33598" },
    { "detections.lastDetectedOn": { $gte: mins5 } }
  ]
}).map(function (doc) {
  var results = [];
  // keep only the detections that actually fall within the window
  doc.detections.forEach(function (detection) {
    if (detection.lastDetectedOn > mins5) {
      results.push(detection.lastDetectedBy);
    }
  });
  return results;
});
The second part can be solved by a similar query using an update instead of a find, as sketched below.
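A minimal sketch of that second query, assuming the invoices live in a collection named 'invoice' (the question does not name it):

// 30 seconds ago; widen the window as needed
var secs30 = new Date(ISODate() - 1000 * 30);
db.getCollection('invoice').updateMany(
  {
    "InvoiceContact": "",             // no contact recorded
    "InvoiceDate": { $gte: secs30 },  // issued within the window
    "isActive": true
  },
  { $set: { "isActive": false } }
);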

Firebase Database: Get only one child node, out of many child nodes

I am using the Firebase Realtime Database. I don't want to fetch all the child nodes of a particular parent node; I am concerned with one particular child node, not the sibling nodes. Fetching all the sibling nodes increases my Firebase billing, as an extra XXX MB of data is fetched. I am using the Node.js admin library for fetching.
Here is a sample of the JSON:
{
  "phone" : {
    "shsjsj" : {
      "battery" : {
        "isCharging" : true,
        "level" : 0.25999999046325684,
        "updatedAt" : "2018-05-15 12:45:29"
      },
      "details" : {
        "deviceCodeName" : "sailfish",
        "deviceManufacturer" : "Google",
        "deviceName" : "Google Pixel"
      },
      "downloadFiles" : {
        "7bfb21ff683f8652ea390cd3a380ef53" : {
          "uploadedAt" : 1526141772270
        }
      },
      "token" : "cgcGiH9Orbs:APA91bHDT3mI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
      "uploadServer" : {
        "createdAt" : 1526221336542
      }
    },
    "hshssjjs" : {
      "battery" : {
        "isCharging" : true,
        "level" : 0.25999999046325684,
        "updatedAt" : "2018-05-15 12:45:29"
      },
      "details" : {
        "deviceCodeName" : "sailfish",
        "deviceManufacturer" : "Google",
        "deviceName" : "Google Pixel"
      },
      "downloadFiles" : {
        "7bfb21ff683f8652ea390cd3a380ef53" : {
          "uploadedAt" : 1526141772270
        }
      },
      "token" : "cgcGiH9Orbs:APA91bH_oC18U56xct4dRuyw9qhI5L3N62hqUT2LojqsC0IhwntirCd6x0zz1CmVBz6CqkrbC",
      "uploadServer" : {
        "createdAt" : 1526221336542
      }
    }
  }
}
In the sample JSON above, I want to fetch every phone->$deviceId->token. Currently I fetch the whole phone object and then iterate over all the phone IDs to read each token. This spikes my database download usage and increases billing. I am only concerned with the token of each device; the siblings of token are unnecessary.
All queries to the Realtime Database fetch everything under the requested location; there is no way to limit the result to certain children under that location. If you want only certain children at a location, but not everything under it, you'll have to query for each of them separately. Or you can restructure or duplicate your data to support the specific queries you want to perform; duplication is common for NoSQL-type databases.
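As an illustration of the duplication approach with the Node.js Admin SDK: keep a hypothetical top-level /tokens node that mirrors each device's token, write it whenever the token changes, and then a single read of /tokens downloads only the tokens.

const admin = require("firebase-admin");
admin.initializeApp(); // assumes credentials are configured in the environment

const db = admin.database();

// Mirror the token into /tokens/<deviceId> whenever it is written.
function mirrorToken(deviceId, token) {
  return db.ref("tokens/" + deviceId).set(token);
}

// One small read instead of downloading the whole /phone subtree.
async function getAllTokens() {
  const snap = await db.ref("tokens").once("value");
  return snap.val(); // e.g. { shsjsj: "...", hshssjjs: "..." }
}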

Editable document fields in Elasticsearch

I have documents that contain an object whose attributes are editable (add/delete/edit) at runtime.
{
  "testIndex" : {
    "mappings" : {
      "documentTest" : {
        "properties" : {
          "typeTestId" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "createdDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "designation" : {
            "type" : "string",
            "fields" : {
              "raw" : {
                "type" : "string",
                "index" : "not_analyzed"
              }
            }
          },
          "id" : {
            "type" : "string",
            "index" : "not_analyzed"
          },
          "modifiedDate" : {
            "type" : "date",
            "format" : "dateOptionalTime"
          },
          "stuff" : {
            "type" : "string"
          },
          "suggest" : {
            "type" : "completion",
            "analyzer" : "simple",
            "payloads" : true,
            "preserve_separators" : true,
            "preserve_position_increments" : true,
            "max_input_length" : 50,
            "context" : {
              "typeTestId" : {
                "type" : "category",
                "path" : "typeTestId",
                "default" : [ ]
              }
            }
          },
          "values" : {
            "properties" : {
              "Att1" : {
                "type" : "string"
              },
              "att2" : {
                "type" : "string"
              },
              "att400" : {
                "type" : "date",
                "format" : "dateOptionalTime"
              }
            }
          }
        }
      }
    }
  }
}
The values field is an object that can be edited through typeTest, so if I change something in typeTest it should be reflected here. If I create a new field there is no problem, but it should also be possible to edit or delete existing fields in typeTest. For example, if I delete values.att1, every documentTest document should lose that field, and the mapping should be updated as well.
From what I have seen, we cannot do this without reindexing. So for now my solution is to remove the fields in Elasticsearch, just as mentioned in this question, and have a worker do the reindexing from time to time if needed.
This does not seem like a real "solution" to me. Is there a better way to have documents of this type in Elasticsearch, with this flexibility, without having to reindex from time to time?
You can use the Update API to delete, add, or modify a field.
The catch is that documents are immutable in Elasticsearch, so when you make changes with the Update API, the old document is marked as deleted and a new one is added with the updates.
Deleting the old document and creating the new one is transparent to you, so you do not have to reindex or do anything else. The downside is that if you plan to modify very large numbers of documents (say, an update query that touches 5 million documents), it will be very I/O intensive for the nodes.
By the way, this also applies to deletions.
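As an illustration, here is a sketch of removing a field (values.Att1 from the earlier mapping) across all matching documents with the JavaScript client, version 7 style. It assumes _update_by_query is available (core since Elasticsearch 5.x) and that the cluster supports Painless scripts; note that the field's mapping entry itself only disappears after a reindex, even once the documents no longer carry it.

const { Client } = require("@elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });

async function dropAtt1() {
  await client.updateByQuery({
    index: "testIndex",
    body: {
      // only touch documents that actually contain the field
      query: { exists: { field: "values.Att1" } },
      // remove the attribute from each stored _source
      script: { source: "ctx._source.values.remove('Att1')" }
    }
  });
}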
