Achieving incremental CardDAV sync with Node dav client

I'm trying to write a simple node.js program to sync a few address books from a CardDAV server to a local MySQL database. I'm using the node dav client.
I know CardDAV supports syncing only the changes since the last sync via a sync-token, and I see some references to sync tokens when browsing through the source and readme of the dav client. But I'm very new to DAV, so I'm not 100% sure how to put it all together.
I'm guessing I need to store the sync token (and level?) the server sends back after I run a sync and then include that in my next sync request. Am I on the right track?

Building a CardDAV client is a great resource that describes how all of this works, including WebDAV Sync, which is what you are looking for.
Note that a server is not required to provide WebDAV Sync (and quite a few don't).
Also note that even if servers support WebDAV Sync, they can expire the tokens however/whenever they want (e.g. some only store a single token, or only keep it for a limited time).
In short: do not rely on WebDAV Sync. If it is not available, or the token has expired, you need to fall back to a full, regular sync (comparing hrefs and ETags).
I'm guessing I need to store the sync token (and level?) the server sends back after I run a sync and then include that in my next sync request. Am I on the right track?
Yes, you are on the right track. Sync-tokens are usually per collection (Depth:1; I think they can be Depth:infinity, but I'm not sure).
So you need to store it alongside the URL of the collection you are syncing.
Then, in the next sync request, you embed it in the sync-collection REPORT. If the token is still valid, you get back the new/deleted/changed records. If the token was invalidated, you need to perform a full sync.
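For illustration, here is a minimal sketch of what that sync-collection REPORT (RFC 6578) looks like on the wire, using Node 18+'s built-in fetch rather than the dav client's wrapper; the response parsing and the fullSync fallback are left as stubs:

    // Stub: the full-sync fallback (PROPFIND Depth:1, compare hrefs/ETags).
    async function fullSync(collectionUrl) { /* not shown */ }

    // Sketch only: issue a sync-collection REPORT against one address book.
    // `savedToken` is whatever you stored from the previous run ('' at first).
    async function syncCollection(collectionUrl, savedToken, authHeader) {
      const body = `<?xml version="1.0" encoding="utf-8"?>
        <d:sync-collection xmlns:d="DAV:">
          <d:sync-token>${savedToken}</d:sync-token>
          <d:sync-level>1</d:sync-level>
          <d:prop><d:getetag/></d:prop>
        </d:sync-collection>`;
      const res = await fetch(collectionUrl, {
        method: 'REPORT',
        headers: {
          'Content-Type': 'application/xml; charset=utf-8',
          'Depth': '0', // scope comes from sync-level, not the Depth header
          'Authorization': authHeader,
        },
        body,
      });
      if (!res.ok) {
        // An expired/invalid token typically yields a 403 with a
        // DAV:valid-sync-token error: fall back to a full sync.
        return fullSync(collectionUrl);
      }
      const xml = await res.text();
      // Parse the multistatus, apply changes to MySQL, then persist the new
      // <d:sync-token> alongside the collection URL for the next run.
      return xml;
    }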
Hope that helps :-)

Related

How should I set up the auth flow when handling drive files of a user server side?

The gist of it all is that I'm trying to fetch audio metadata from a user's Google Drive files and store it in Firebase.
At first I intended to do this locally, entirely client-side, because my front-facing web/iOS/Android app is in Flutter.
But as it turns out, there's almost no library that handles audio metadata properly. After dabbling with it, I realized I could probably get some formats working (say, .wav and most RIFF-type audio files), but writing an entire library to handle all kinds of audio metadata was a task significantly bigger than my original plans. Another option would be to create interfaces between C++ and/or JS code and my Flutter application, but I'd have almost no control over that, it's not the easiest of processes, and there could be inconsistencies between platforms.
I might make that library eventually, but to facilitate my work I decided to use a server as a middleman that runs Node and handles the file requests and metadata processing, and also simplifies the interactions with Firebase by having them handled by a service account.
Now, this makes me run into one issue: how to handle the Google auth.
When my user logs into my app, I get all the required auth scopes (Google Drive file read and write access, contacts, email, etc.) for my app; it goes through the consent screen and I get authenticated.
I'm still a little confused by Google's recommendations and best practices in this case, since my app in itself did not require an auth system beyond getting access to the Google Drive files through Google identification, and I therefore do not have Firebase/Firestore users. I can simply store users in my (Firestore) database for identification purposes (or maybe tie the frontend flow into my Firestore app to also create a user when logging in through Google, if that is possible; I'm currently using the google_sign_in Flutter package).
To come back to my actual problem now that the situation is laid out:
Should I just transfer the auth tokens (perhaps re-verifying them in some way to avoid impersonation) from my frontend app to the server through an HTTPS POST request or through headers, and use them to directly query the Google Drive API (I wouldn't even need to store them outside of memory, which would be relatively safe against attacks on the server itself), handling the files and possibly expired tokens?
Or should I modify my frontend workflow so it directly grants access to my server, which would handle the session, rather than getting the tokens locally?
In the first case, I would most likely just use the users' UIDs as identifiers for the Firestore data (none of it is sensitive anyway; it would simply be playlists and some metadata). In the second case, I could probably implement stronger security on Firestore using Firestore rules, but it would require a significant amount of refactoring and logic changes in my frontend.
In case that wasn't clear: I want my server to make all the Drive-related requests (after getting the proper authorizations from the user, of course) and handle them without having to fetch the files locally in the frontend. Both solutions (and others, if available) should work, but I'm wondering what the best practice would be, given the OAuth2 system used by Google and the fact that the authorization transitions between client and server and could be subject to security issues.
I'll add code/visual representations if this isn't clear enough. It is to me, but I obviously designed the mess.
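To make the first option concrete, here is roughly what I picture on the server (a sketch using the googleapis package; the handler shape and the audio query are just illustrative, and it assumes express.json() middleware):

    const { google } = require('googleapis');

    // Hypothetical Express handler: the Flutter app POSTs its access token,
    // and the server uses it (held in memory only) to query Drive.
    async function listAudioFiles(req, res) {
      const auth = new google.auth.OAuth2();
      auth.setCredentials({ access_token: req.body.accessToken });
      const drive = google.drive({ version: 'v3', auth });
      try {
        const { data } = await drive.files.list({
          q: "mimeType contains 'audio/'",
          fields: 'files(id, name, mimeType)',
        });
        res.json(data.files);
      } catch (err) {
        // A 401 here typically means the token expired: ask the client to
        // re-authenticate rather than storing refresh tokens server-side.
        res.status(401).json({ error: 'token expired or invalid' });
      }
    }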

How to prevent snooping by user of Mac app?

I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices, so it is not possible to securely prevent them from proxying or exploring what your client-side app does. Obfuscation might seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can easily observe it, if nothing else then in network packets (though usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
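A minimal sketch of that backend-proxy arrangement (Express, Node 18+; the service URL and cookie value are placeholders):

    const express = require('express');
    const app = express();

    // The real destination and cookies live only on the server.
    const SERVICE_URL = 'https://real-service.example.com';
    const SERVICE_COOKIES = process.env.SERVICE_COOKIES || '';

    app.get('/api/data', async (req, res) => {
      // The user only ever observes a request to your backend; the real
      // URL and cookies never touch their machine.
      const upstream = await fetch(`${SERVICE_URL}/some/path`, {
        headers: { Cookie: SERVICE_COOKIES },
      });
      res.status(upstream.status).send(await upstream.text());
    });

    app.listen(3000);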
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective only against wannabes, I'm afraid - is to first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or that it properly fails if the server never existed. Anything else would be a telltale sign that someone has taken over the network layer; at that point you could connect to a different server, make different calls, and lament that the server isn't answering properly.
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time, and otherwise stored in a different form - you can, for example, take a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it from any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
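A sketch of that XOR masking in Node (crypto.randomBytes supplies the random sequence):

    const crypto = require('crypto');

    // Keep only `key` and `masked` in memory; reconstruct the URL on demand
    // and wipe the plaintext buffer as soon as you're done with it.
    function mask(url) {
      const plain = Buffer.from(url, 'utf8');
      const key = crypto.randomBytes(plain.length);
      const masked = Buffer.alloc(plain.length);
      for (let i = 0; i < plain.length; i++) masked[i] = plain[i] ^ key[i];
      return { key, masked };
    }

    function unmask({ key, masked }) {
      const plain = Buffer.alloc(masked.length);
      for (let i = 0; i < masked.length; i++) plain[i] = masked[i] ^ key[i];
      return plain; // call plain.fill(0) after use to clear it from memory
    }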
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.

Protect remote resources when served with nodejs

This is more of an architecture question involving nodejs as implementation.
I have files, in a folder not exposed by the webserver, that I want to offer to the user.
The way nodejs should expose the resource to the end user is via a one-shot link that, once consumed, is no longer available.
The user through the entire experience should never know the real location of the file.
I'm sure this is a common architecture pattern, but I have never implemented something similar.
Looking at scalability, the resource shouldn't be copied to either disk or RAM, and if possible the solution should not rely on a DB token-tracking system.
I don't necessarily need a code implementation, but a detailed explanation of how I should implement it.
Thank you so much
Give the user a cookie.
Create a temporary association (in a DB) between the cookie and a generated ID (or its hash, if you want to be fancy).
Give the user the ID.
When the user requests the resource by ID:
Test whether the ID (or its hash) is in the DB.
If it is, give the user the resource and destroy the association between the user and the resource ID.
Yes, there's a DB token-tracking system in there. Hey, that's the only way (a sketch follows).
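A minimal sketch of that flow, with an in-memory Map standing in for the DB (a real deployment would use a shared store; the file path is made up):

    const crypto = require('crypto');
    const express = require('express');
    const app = express();

    const grants = new Map(); // id -> real file path, consumed on first use

    app.get('/grant', (req, res) => {
      const id = crypto.randomUUID();
      grants.set(id, '/private/files/report.pdf'); // never shown to the user
      res.json({ link: `/download/${id}` });
    });

    app.get('/download/:id', (req, res) => {
      const filePath = grants.get(req.params.id);
      if (!filePath) return res.sendStatus(404);
      grants.delete(req.params.id); // one shot: consume before serving
      res.sendFile(filePath);       // streams the file, no full copy in RAM
    });

    app.listen(3000);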
One way to avoid depending on a DB would be to create a symbolic link in the filesystem (based on the token) that is removed after a request for it. That would not work satisfactorily on Windows, though.
Example (pseudo):
Create token (guid, or similar)
symlink guid -> actual file
once request is completed, remove symlink
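Concretely, the symlink variant might look like this (a sketch; POSIX only, paths are placeholders, and the token should be validated to prevent path traversal):

    const crypto = require('crypto');
    const fs = require('fs');
    const path = require('path');
    const express = require('express');
    const app = express();

    const PUBLIC_DIR = '/srv/oneshot';            // directory for symlinks
    const REAL_FILE = '/private/files/report.pdf'; // never exposed directly

    app.get('/grant', (req, res) => {
      const token = crypto.randomUUID();
      fs.symlinkSync(REAL_FILE, path.join(PUBLIC_DIR, token));
      res.json({ link: `/download/${token}` });
    });

    app.get('/download/:token', (req, res) => {
      // Reject anything that isn't a plain token (path-traversal guard).
      if (!/^[0-9a-f-]+$/.test(req.params.token)) return res.sendStatus(400);
      const link = path.join(PUBLIC_DIR, req.params.token);
      // Remove the symlink once the response finishes (success or error).
      res.sendFile(link, () => fs.unlink(link, () => {}));
    });

    app.listen(3000);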
However, I don't think there is a reliable way of knowing whether the file was successfully downloaded, so you'd better prepare for that. Some sort of pingback once the file has been completely downloaded is probably the most reliable approach I can think of right now.
For scalability, make sure that the symlink is on a shared filesystem. Clustered node.js instances on the same server will be fine, though.
If this needs to be restricted to an authenticated user, you could combine the guid with your auth token, and prepend/append it before looking for a file.

How to restrict Chrome Apps to only work on specific computers?

I'm developing a POS Client using Chrome (packaged) Apps. It will run locally on the installed computers and interact with the server via web service. This app should only run on specific computers at the stores.
I know I can go to each store and install the .crx file in which case I don't have to publish the app to Chrome Web Store. However, I want it to be published to Chrome Web Store so that I can take advantage of its auto-updating feature.
What should I do to make sure that the app can only run on the stores' computers? (I can go to the stores and set up anything needed at the first installation.)
Options I have thought of:
Create some secret key and enter it into the app the first time it runs.
Build a small tool (a WinForms application) to generate time-based tokens and install it on the computers. The staff would need to enter a token each time they open the app.
Any better idea how to accomplish this?
You said the app needs to talk to a web service to work. That's the key to a simple approach. (Assume you don't care whether the staff acquires a nonfunctional copy of the client app.)
At startup, the app checks for the existence of a validation of some kind stored in chrome.storage.local. If it exists, startup continues.
If the validation is missing, the app checks for the existence of a GUID stored in chrome.storage.local.
If the GUID is missing, generate and store one using something like window.crypto.getRandomValues().
Ask the server for a validation by sending the GUID and reading the response.
If a validation comes back, save it in chrome.storage.local and go back to the start of this sequence.
Otherwise, tell the user to get lost. (This sequence is sketched below.)
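That sequence could look something like this (a sketch; the /validate endpoint and its response format are made up):

    // Runs at app startup; done(true) lets the app continue.
    function ensureValidated(done) {
      chrome.storage.local.get(['validation', 'guid'], function (items) {
        if (items.validation) return done(true); // already provisioned
        var guid = items.guid;
        if (!guid) {
          var buf = new Uint8Array(16);
          window.crypto.getRandomValues(buf);
          guid = Array.prototype.map.call(buf, function (b) {
            return ('0' + b.toString(16)).slice(-2);
          }).join('');
          chrome.storage.local.set({ guid: guid });
        }
        // Hypothetical licensing endpoint: returns the validation, or fails.
        fetch('https://example.com/validate?guid=' + guid)
          .then(function (r) { return r.ok ? r.text() : Promise.reject(); })
          .then(function (validation) {
            chrome.storage.local.set({ validation: validation }, function () {
              ensureValidated(done); // back to the start of the sequence
            });
          })
          .catch(function () { done(false); }); // tell the user to get lost
      });
    }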
A full-strength version of this approach would have some additional features:
Use an HMAC(GUID, secret) for the validation (sketched after this list). I'm assuming the staff aren't tech superstars, so something simple like a boolean would probably suffice.
Optionally add a per-launch step that sends up the GUID and validation and confirms they're still valid each time.
When the validation is requested, you might prompt for the secret key you mentioned in your question. In normal cases this would be needed only at provisioning time.
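The server side of the HMAC variant is a few lines of Node (a sketch; how you store and provision the secret is up to you):

    const crypto = require('crypto');

    // validation = HMAC(GUID, secret), handed out at provisioning time.
    function makeValidation(guid, secret) {
      return crypto.createHmac('sha256', secret).update(guid).digest('hex');
    }

    // Per-launch check: constant-time comparison to avoid leaking bytes.
    function checkValidation(guid, validation, secret) {
      const expected = Buffer.from(makeValidation(guid, secret), 'hex');
      const given = Buffer.from(validation, 'hex');
      return expected.length === given.length &&
             crypto.timingSafeEqual(expected, given);
    }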
In case you haven't figured it out yet, the server is now acting like a simple licensing server, so it's up to you to decide how to decide whether the validation request succeeds. Maybe it allows only N validations to exist at once, or after you're done provisioning you hardcode future validations to fail. Maybe it limits validation requests to certain IP addresses. You get to choose.
That's the gist. It's a simple DRM system that is easier to manage than the enter-secret-at-installation method, but that won't withstand an attack of more than 30 minutes (since a smart attacker will just inject another machine's GUID and HMAC validation into the duplicate machine's chrome.storage.local).

OData & Concurrency Token

I'm building an application that needs to be able to store data offline and sync it to an OData server. While I was reviewing the videos, one of the presenters mentioned a concurrency token - can this token be used to help with the sync process? If so, how do I use it, and what do I need to watch out for?
For further info, my target platform is Android, so I'm unable to use the Microsoft Sync Framework.
The concurrency token, also called the ETag (as per the HTTP RFC), can be thought of as a version number of the entity instance. The server bumps it every time the entity changes. The server sends it along with the entity to the client, and the client sends it back to the server (in the If-Match header) when applying changes. The server then checks that the client has the latest version: if so, it applies the change; otherwise, it fails the update request.
It can help you with the sync process in two ways:
If you get an entity from the server and it has a different ETag from what your client remembers, the server instance has changed, so you may either update the client instance or perform some conflict resolution if the client made changes.
The other way is to send all the changes from the client to the server (with the right ETags); the server will fail all of those that have already changed on the server. The client can then get the new versions from the server, resolve the conflicts, and try again.
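In HTTP terms, each change you push looks roughly like this (a JavaScript sketch; the entity URL and payload are placeholders, and depending on the OData version the service may expect MERGE instead of PATCH):

    // Push one change, guarded by the ETag the client last saw.
    async function pushChange(entityUrl, etag, patch) {
      const res = await fetch(entityUrl, {
        method: 'PATCH',
        headers: {
          'Content-Type': 'application/json',
          'If-Match': etag,
        },
        body: JSON.stringify(patch),
      });
      if (res.status === 412) {
        // Precondition Failed: the entity changed on the server since the
        // last sync. Re-fetch it, resolve the conflict, and try again.
        return 'conflict';
      }
      return 'ok';
    }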
Note, though, that unless your ETag is one of the publicly accessible properties on the entity, there is no built-in way to filter on the ETag, so currently you can't ask the server for all entities changed since a given version. If you control the service, though, you might be able to do this yourself (keep a global version number, bump it on each change, have each entity carry the version number of the last change made to it, and then filter on that version property).
