Syncing icons between front and back end - node.js

I am building a notification system with small icons under the primary image (like on Facebook, where the icon indicates whether a notification is about a post reaction, a comment, or something else). My plan is to send a string like "purchase" or "greeting" from the server and convert it to an icon on the frontend. The problem is that I then have to keep two separate pieces of data in sync: one on the server listing the possible icon types, and one on the frontend mapping each type to an icon. I want that to be automated. While I could send the icon itself (react-icons) from the backend, I think that would affect bandwidth terribly, so what is the best practice for doing this?
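One common way to automate this is to keep a single shared module as the source of truth (a shared/ folder in a monorepo, or a tiny npm package used by both sides) and let a CI check fail whenever the frontend icon map drifts. A minimal sketch; the type names and react-icons component names below are illustrative, not from the question:

```javascript
// shared/notificationTypes.js — single source of truth for server and client
const NOTIFICATION_TYPES = ["purchase", "greeting", "comment", "reaction"];

// frontend side: map each type string coming from the server to an icon
// (react-icons component names, purely as an example)
const ICON_FOR_TYPE = {
  purchase: "FaShoppingCart",
  greeting: "FaHandPeace",
  comment: "FaComment",
  reaction: "FaThumbsUp",
};

// sync check: returns the types with no icon mapped; run it in CI or at
// startup and fail loudly if it is non-empty
function missingIcons() {
  return NOTIFICATION_TYPES.filter((t) => !(t in ICON_FOR_TYPE));
}

module.exports = { NOTIFICATION_TYPES, ICON_FOR_TYPE, missingIcons };
```

The server only ever sends the type string, so bandwidth stays minimal; the icons themselves never leave the frontend bundle.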

Doing something in the background and notify user when it's done

I am using Node.js as my server and React on the frontend.
There is a menu item on the UI that says View PDF. When users choose it, I need to get some PDF files from my S3 bucket, put some headers on them, etc.
I can easily put up a modal to show users that the PDF files are being generated, and display a link to the PDF when it's ready.
But what technology should I use if I want to do away with the modal, allow users to continue doing other things, and display, say, a dismissable alert with the link to the PDF when it's ready?
How difficult is that and what do I need?
Definitely take a look at socket.io.
Web sockets allow you to establish two-way communication between the client and server. For your use case, this means you can send a notification to the client from the server.
This is not too difficult to implement but will require a bit of work on both the client and the server. You can find a lot of React examples here.
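A minimal server-side sketch of the pattern. Here `generatePdf` is a hypothetical async function that does the slow S3 work and resolves with the file URL, and `io` is a socket.io Server instance, shown injected as a parameter so the logic stays easy to test:

```javascript
// Kick off the slow PDF job without blocking the HTTP response, then push
// a socket.io event to connected clients when it finishes.
function startPdfJob(io, generatePdf) {
  return generatePdf()
    .then((url) => {
      // the React side listens with socket.on("pdf-ready", ...) and shows
      // a dismissable alert containing the link
      io.emit("pdf-ready", { url });
      return url;
    })
    .catch((err) => {
      io.emit("pdf-failed", { error: err.message });
    });
}

module.exports = { startPdfJob };
```

In the Express route that handles the View PDF menu item, you would respond immediately (e.g. with a 202) and call `startPdfJob(io, ...)` in the background; emitting to a per-user room rather than broadcasting is the usual refinement once users are identifiable.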

Microsoft-Cognitive Face API - Verify. Is there a way to avoid pictures of pictures?

What I want to do is verify users' identity in my mobile app through the smartphone webcam, with a selfie.
So I made a small web app just to test Microsoft Azure Cognitive Services, using the Face API. I take two pictures, get each picture's faceId with Face - Detect, then compare the two faceIds with Face - Verify. If they are the same person, the API responds with a true value and a confidence number, false otherwise.
The thing is, in terms of security: if I take a picture of a picture (say I take a selfie, then point the webcam at that selfie displayed on my phone), it still detects a face, and it is my face. So when I then take a real picture of myself with the webcam and use Face - Verify, it returns true.
So if I want to use this for identity verification, this is a huge security risk. I was wondering if there's a way to prevent it.
We wanted to forward a response from an engineer:
The service would not differentiate between a high-quality photo and a live image. Therefore, we do not recommend the service as a single form of authentication. However, some customers have tried capturing multiple frames to verify that it is not a still image.
Another possible solution is to use a text recognition service along with it.
E.g., generate a random number in the app and ask the user to hold this number in front of the camera (the user could write it on paper or a board, or show it on another screen).
On the server side you will then also need to read the number from the picture to verify it.
There is another recognition API from Microsoft which can detect objects. I've tested it, and it can detect whether there is a cellphone in the picture (i.e. if someone tries to authenticate with a picture shown on a phone).
The problem is when someone holds the phone close enough that its border isn't visible in the frame.
You can also ask the user to perform a random action, like closing the left eye or smiling, and check for it in the second face detection.

Using socket.io to send data to a specific view/id

I have a web application using NodeJS, Express, and MongoDB. In my application, I have a view that can be seen by anyone who accesses the application. That view is rendered with a different image depending on which one the user selects to view (they do not need to be logged in), i.e. the view is mapView/mapId.
Now, I want something similar to notifications to occur in realtime for those who are on that page. When a specific event happens from an external source, I want to display a popup on the view to which the event belongs. An event may belong to one mapView/mapId and not to another mapView with a different ID. All users on the same mapView/mapId should see the notification. Remember, these are general users that do not need to be logged in.
I am researching into Socket.io because I know it is for making realtime applications. But I am wondering if this is even the right way to go. How will I send data to the correct mapView/mapId?
Check out what your server can do with rooms
The idea is that each of your connections, from a particular view, is joined to a room. Then you use socket.io from the server to send a message only to that room, and only those sockets will get the message.
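A minimal sketch of that idea, using one room per mapId. The event and room names are illustrative; `io` is the socket.io Server instance, passed in as a parameter:

```javascript
// each socket tells the server which mapView/:mapId it is rendering,
// and gets joined to that map's room — no login required
function registerMapRooms(io) {
  io.on("connection", (socket) => {
    socket.on("join-map", (mapId) => socket.join("map:" + mapId));
  });
}

// called when the external event arrives; only sockets in that map's room
// receive the notification
function notifyMap(io, mapId, payload) {
  io.to("map:" + mapId).emit("map-notification", payload);
}
```

On the React side, the client emits `join-map` with the mapId from the URL after connecting, and listens for `map-notification` to show the popup.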

How to handle tags and segments in phonegap

I'm making a dashboard for my users (in-app) where they can subscribe to different segments, and I just want to check whether I've understood this correctly.
I create segments in the onesignal.com dashboard, and in one segment I set a key = "value" (e.g. test).
I send a push from my server and include
"tag" => "test",
The application can then call sendTag in an event handler (like a button the user presses) to "subscribe" to that tag in the app.
Is that more or less how this system works? I'm really having a hard time reading it out of the docs.
You have this all correct. If you choose, you can also use the tags field on the OneSignal create notification REST API POST call instead of creating segments on the dashboard first.
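A sketch of what that request body looks like when targeting a tag directly instead of a dashboard segment. The field names follow OneSignal's create notification REST API (tag targeting is expressed via `filters`); check the current OneSignal docs for the exact shape, since details may change between API versions:

```javascript
// build the body for POST https://onesignal.com/api/v1/notifications
// (sent with your REST API key in the Authorization header)
function buildTagNotification(appId, message, tagKey, tagValue) {
  return {
    app_id: appId,
    contents: { en: message },
    // deliver only to devices that called sendTag(tagKey, tagValue)
    filters: [{ field: "tag", key: tagKey, relation: "=", value: tagValue }],
  };
}
```

This keeps the segment logic entirely on your server, so nothing has to be pre-created in the dashboard.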

google earth interactive elements

Is it possible to have interactive elements (e.g. a polygon responding to drag and click events) in Google Earth? (I specifically need Google Earth, not the Google Earth Plugin!)
The documentation doesn't seem to be helpful as most of the activity has moved towards the plugin, but the project in question is using Google Earth. I know I have full access to JavaScript and WebKit inside the balloons, but can I use JavaScript to access KML elements and assign event listeners to them?
UPDATE:
Let's say I want to use Google Earth to control a web cam. The KML would show the region of the field of view of the camera. I would like to be able to drag that region, have JavaScript handle that dragging and invoke a web service which would rotate the webcam accordingly.
Directly responding to polygon click and drag events in Google Earth (outside of the GE API and Plugin) doesn't offer you many options. With the GE API it's easy, but in the Google Earth client you cannot directly respond to a placemark being moved or dragged. Also, once a placemark is sent to the client, its new location after a move cannot be accessed via client-side JavaScript and sent back to the server.
There are a number of interactive techniques to use in KML and Google Earth, some of which might work for what you're trying to do.
You can provide controls or configuration options in HTML forms in the description balloons to customize the display or change the location of the web camera:
For camera control you could show up, down, left, right buttons (maybe even zoom or tilt) in the balloon description; clicking any of those buttons calls your backend controller to move the camera. The output of the action could use a NetworkLinkControl to update the KML already loaded in Google Earth.
You can also consider NetworkLinks that specify a viewFormat via a backend KML generation service. A NetworkLink can refresh and report back to the backend service with the view/camera information and/or other client-side parameters whenever the view changes, so you can respond to view changes (zoom in/out, pan, tilt, etc.) and change state accordingly. If you further constrain NetworkLink updates with onStop, you can suppress incremental updates while the user is in the process of moving and only send refresh updates after the user has stopped, at which point they are presumably looking at something.
The viewFormat would give you access to the following client properties of Google Earth:
[lookatLon], [lookatLat], [lookatRange], [lookatTilt], [lookatHeading]
[lookatTerrainLon], [lookatTerrainLat], [lookatTerrainAlt]
[cameraLon], [cameraLat], [cameraAlt]
[horizFov], [vertFov]
[horizPixels], [vertPixels]
[terrainEnabled]
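Wired together, a NetworkLink using onStop and a viewFormat reporting some of those properties looks roughly like this (the href is a placeholder for your backend KML service, and the parameter selection is just an example):

```xml
<NetworkLink>
  <name>Camera view reporter</name>
  <Link>
    <href>http://example.com/kml-service</href>
    <viewRefreshMode>onStop</viewRefreshMode>
    <viewRefreshTime>1</viewRefreshTime>
    <viewFormat>lookatLon=[lookatLon]&amp;lookatLat=[lookatLat]&amp;range=[lookatRange]</viewFormat>
  </Link>
</NetworkLink>
```

Google Earth appends the filled-in viewFormat query string to the href on each refresh, so your service sees where the user stopped and can respond with updated KML.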
