Can we store concept values or values produced by actions in the session, or is there any temporary storage?
I only found $vivContext, which is available across the whole session, but we can't store values there either.
Bixby doesn't support anything like this.
If you wish to store things across sessions, it's highly recommended that you use your own third-party storage solution.
By default, concepts are stored for the duration of a session/conversation (mark them as transient if you do not want this behavior, i.e. if you want them to be request-scoped). However, to store anything across sessions, you need to use an API call to external storage.
I wrote a tutorial capsule that uses AWS DynamoDB for remote storage. It includes step-by-step instructions for using AWS to store things between sessions. Please feel free to clone or fork it and customize it for your needs.
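As a rough illustration of what the remote-storage side can look like, here is a minimal Node.js/Express handler backed by DynamoDB that a capsule could call over HTTP. The table name, key schema, and routes are assumptions for this sketch, not the tutorial's actual code:

```js
// Minimal remote-storage endpoint: the capsule POSTs a userId plus a value,
// and reads it back in a later session with a GET. (Table and field names are made up.)
const express = require('express');
const AWS = require('aws-sdk');

const app = express();
app.use(express.json());
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

// Save a value for a user across sessions.
app.post('/store', async (req, res) => {
  await db.put({
    TableName: 'CapsuleStorage',
    Item: { userId: req.body.userId, value: req.body.value },
  }).promise();
  res.sendStatus(204);
});

// Fetch it back later.
app.get('/store/:userId', async (req, res) => {
  const { Item } = await db.get({
    TableName: 'CapsuleStorage',
    Key: { userId: req.params.userId },
  }).promise();
  res.json(Item || {});
});

app.listen(3000);
```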
Related
I am kind of confused about when an API is needed. I recently created a mobile app with Flutter and Cloud Firestore as the database, where I simply queried and wrote to the database as needed. Now I am learning full-stack web development, and I recently watched a tutorial where the author built an Express API with GET, POST, and DELETE functionality for a simple item in the database.
Coming from a background where I just accessed the database directly, I am not sure why an API is necessary in this case. Is it so I wouldn't have to rewrite the queries every time? This is a very simple project, so he's definitely not making a third-party API for other developers to use. Am I misunderstanding what an API does exactly?
It was really simple: there was one collection in a MongoDB database, and he was using Postman to read from and write to the database to check that it works.
An API is a standard way for your front end (web/mobile) to store and retrieve information for your application. Your front end should never access the database directly. The purpose of the front end is to display the interface and do minimal processing; all the application logic should live in your backend (API server), which is exposed to your front end via API calls (GET, POST, etc.). So to store an item in your database, you write the data-storing logic in your backend and expose an API endpoint that performs the storing operation when triggered. Your front end calls that endpoint to trigger the storing process. This way, your storage/database logic is never exposed; only the API URL is. The front end is meant to be exposed, whereas the backend and database should never be exposed to, or used directly from, the front end.
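To make that concrete, here is a minimal sketch of such an Express backend. The routes, the "items" collection, and the MongoDB connection details are illustrative assumptions, not the tutorial's actual code:

```js
// Minimal Express API exposing the storage logic behind HTTP endpoints.
// The front end only ever calls these URLs; it never touches MongoDB itself.
const express = require('express');
const { MongoClient, ObjectId } = require('mongodb');

const app = express();
app.use(express.json());

// Assumed local MongoDB instance and database/collection names.
const client = new MongoClient('mongodb://localhost:27017');
const items = client.db('shop').collection('items');

// GET /items - return all items
app.get('/items', async (req, res) => {
  res.json(await items.find().toArray());
});

// POST /items - validate and store a new item
app.post('/items', async (req, res) => {
  if (!req.body.name) return res.status(400).json({ error: 'name is required' });
  const result = await items.insertOne({ name: req.body.name });
  res.status(201).json({ _id: result.insertedId });
});

// DELETE /items/:id - remove an item
app.delete('/items/:id', async (req, res) => {
  await items.deleteOne({ _id: new ObjectId(req.params.id) });
  res.status(204).end();
});

client.connect().then(() => app.listen(3000));
```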
Maybe for you an API is not necessary, but there are many use cases for an API.
For example:
You don't have to write the business logic for every platform (iOS, Android, web, whatever).
Your app will be lightweight, since some computation is offloaded to the server.
Your app can be reverse engineered to extract secret information (or your secret algorithm, perhaps?), so those are safer behind a server.
What if you need to store something in a filesystem that you want to share with others?
Also a good read: Why we should use REST?
In your case, you are using a pre-written SDK which knows how to connect to Firestore, does caching, updates application data when needed, and provides a standard method of reading, writing and deleting data in Firestore (with associated documentation and example data from Google).
Therefore, using an API (as described for the mongoDB) is not required and is undesirable.
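For example, reading and writing directly with the Firestore web SDK looks roughly like this (v8-style namespaced API; the collection and field names are made up, and firebase.initializeApp() is assumed to have run already):

```js
// Assumes the v8 Firebase SDK is loaded and firebase.initializeApp(...) has run.
const db = firebase.firestore();

async function addAndReadItem() {
  // Write: add a document to a (hypothetical) "items" collection.
  await db.collection('items').add({ name: 'Coffee', createdAt: Date.now() });

  // Read: query the same collection; security rules decide what this user may see.
  const snapshot = await db.collection('items').where('name', '==', 'Coffee').get();
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}
```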
There are some cases where you might want the client to have no direct read or write access to a Firestore collection or document. In that case, you could write a Cloud Function which your app calls with parameters; it receives the data that you want to write and does some sort of checking or manipulation beyond the capabilities of Cloud Firestore rules (although those can get pretty sophisticated). See https://firebase.google.com/docs/firestore/security/get-started
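As a rough sketch of that approach, here is a hypothetical callable Cloud Function; the function name, validation, and collection are assumptions:

```js
// Callable Cloud Function: the app never writes to the collection directly;
// the function validates the payload and writes server-side.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.addItem = functions.https.onCall(async (data, context) => {
  // Only signed-in users may call this (example check).
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  // Server-side validation beyond what security rules can express.
  if (typeof data.name !== 'string' || data.name.length > 100) {
    throw new functions.https.HttpsError('invalid-argument', 'Bad item name.');
  }
  const ref = await admin.firestore().collection('items').add({
    name: data.name,
    owner: context.auth.uid,
  });
  return { id: ref.id };
});
```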
Todd (in the video contained in the link above) does a few good videos on this subject.
However, this is not really working in the same way as the API you mentioned in your question.
So in the case of using Firestore, you should use the SDK and not re-invent the wheel by creating your own API.
If you want to share photos, for example, you can also store them in Firebase Storage and then provide a URL for other devices to access them without your app being installed.
If you want to write something to Firestore which is then sent to all other users, you can use listeners in each app, and the data will be pushed to the apps after it arrives at Firestore.
https://firebase.google.com/docs/firestore/query-data/listen gives an overview of this.
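A minimal listener sketch with the web SDK (the "messages" collection and its fields are made up):

```js
// Real-time listener: every running client gets updates as soon as documents
// in the (hypothetical) "messages" collection change. Assumes the SDK is initialized.
const unsubscribe = firebase
  .firestore()
  .collection('messages')
  .orderBy('createdAt')
  .onSnapshot((snapshot) => {
    snapshot.docChanges().forEach((change) => {
      if (change.type === 'added') {
        console.log('New message:', change.doc.data());
      }
    });
  });

// Call unsubscribe() later to stop listening and avoid unnecessary reads.
```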
One thing to always look at with Firebase is the cost of doing anything; invoking a Cloud Function, for example, costs more than reading a Firestore document.
This gives an overview of pricing for the different capabilities within the Firebase platform:
https://firebase.google.com/pricing
Another important factor is coupling. To add to #Dijkstra's answer, an API provides a way to decouple the logic from its consumers, which allows for more reliability, maintainability, fault tolerance and, if required, scalability.
Thus there is no right or wrong here, and the comparison of API vs. direct DB call is in itself not really justified, because fetching the data from the database is the ultimate aim either way, whether you use a REST API or query the database directly.
The means of achieving that aim can differ based on the specific requirements. For example, consider fetching water from a well.
You can always climb down the well and fetch a bucket of water if you need one bucket per day and you are the only user.
But if there are many users, you would want to install a pulley and wheel that people use to draw water into their buckets; yet again, this depends on how many users there are, since it will not work for more than, say, 100 users per day.
If an entire community of, say, 1000 users is going to need the water, you would go with a more complex solution: installing a motorized water pump and supplying the water to the users' homes via a pipeline. This solution has many benefits, like fast supply, ease of use, filtered water, scheduling, etc., but the cost and effort to achieve it are higher as well.
All in all, it comes down to the cost-vs-benefit ratio, which you and only you can chart out for the different solutions against your particular problem, as you are the best judge of scale and future user flow.
While doing that, you can ask the following questions about a solution to help you decide:
Is the solution satisfying the primary requirement of the problem?
How much time is it going to take to build it?
For the time we spend building a solution, is it going to be working at 75% or more of its capacity?
If not is there a simpler solution that I can use to satisfy the problem and scale it as the requirement increases?
HTH.
I have been reading docs and articles on PouchDB/CouchDB/Cloudant, but I am not able to put this simple architecture together in my head. I need help!
So there are many users of the app. Each user has a separate database (which I read is the usual approach in a pouch/couch/cloudant setup).
Now let's just focus on a single user. This user already has some remote data on our server (CouchDB). He has 3 separate docs stored.
He accesses docs 1 and 2 from browser 1, and docs 2 and 3 from browser 2.
The content in both browsers must stay in sync.
Should I be using the sync API of PouchDB? As I read, it syncs the whole database. How can I use this API to sync only a subset of the central database? Is filtered replication the answer here?
Also, I don't want to push both docs in a single call; he should be able to access docs as he needs them.
What is the correct approach to implementing this logic with pouch/couch databases? If you can explain with a little code, that would be great. I just need the basic ideas.
Is this kind of problem easily solvable in the upcoming releases of CouchDB 2.0 and pouchdb-find?
Thanks a lot!
If you take a look at the PouchDB replication documentation, you should see the options.doc_ids parameter. It lets you set up replication for specific document ids, which would solve your problem in this scenario.
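A minimal sketch of what that could look like, assuming a per-user remote database and made-up document ids and URLs:

```js
// Each browser replicates only the docs it actually needs from the
// user's central CouchDB database.
const PouchDB = require('pouchdb');

const local = new PouchDB('user_docs_local');
const remote = new PouchDB('https://couch.example.com/user_42');

// Browser 1 only cares about doc1 and doc2.
local.sync(remote, {
  live: true,                 // keep syncing as changes happen
  retry: true,                // reconnect automatically after network errors
  doc_ids: ['doc1', 'doc2'],  // restrict replication to these documents
}).on('change', (info) => {
  console.log('replicated', info);
}).on('error', (err) => {
  console.error('sync error', err);
});
```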
I am developing a social app on iOS that has many-to-many relations, local persistence, and user interaction. I have tried using the native Parse API on iOS and found it too cumbersome to do all the client-server logic myself, so my focus has shifted to finding a syncing solution.
After some research I found AFIncrementalStore, which is quite easy to use and highly integrated with Core Data. I have just started to work with it and I have two questions:
1) How do I handle the authentication process? Is it done in AFRESTClient?
2) How to set up AFRESTClient to match Parse's REST API? (an example would be great!)
P.S. I also found FTASync, which seems to be another solution. Any thoughts on this framework?
Any general suggestion on client-server syncing solutions will be highly appreciated!
Thanks,
Lei Zhang
Back with iOS 5, Apple silently rolled out NSIncrementalStore to manage the connection between APIs and persistent stores. Because I couldn't word it better myself:
NSIncrementalStore is an abstract subclass of NSPersistentStore designed to "create persistent stores which load and save data incrementally, allowing for the management of large and/or shared datasets". And while that may not sound like much, consider that nearly all of the database adapters we rely on load incrementally from large, shared data stores. What we have here is a goddamned miracle.
Source: http://nshipster.com/nsincrementalstore/
That being said, I've been working on my own NSIncrementalStore (built specifically for Parse and utilizing the Parse iOS/OS X SDK) and you're welcome to check out/use/contribute to the project at https://github.com/sbonami/PFIncrementalStore.
Take a look at this StackOverflow question and at Chris Wagner's article on raywenderlich.com.
The linked SO question has examples for how to include the authentication token with each request to Parse. So you'll just need to have the user log in first, and store their token to include it with each subsequent request.
Chris Wagner's tutorial has a sample AFHTTPClient named SDAFParseApiClient to communicate with the Parse REST API. You'd have to adapt it to be an AFRESTClient subclass, but it should give you a start.
Some other thoughts between the two solutions you're considering:
AFIncrementalStore does not allow the user to make any changes without a network connection, while FTASync keeps a full Core Data SQLite store locally and syncs changes to the server when you tell it to.
FTASync requires you to make all your synched managed objects subclasses of FTASyncParent, with extra properties for sync metadata. AFIncrementalStore keeps its metadata behind the scenes, not in your model.
FTASync appears not to be widely used and hasn't been updated in over a year; if you use it you will likely be maintaining it.
I am looking for a way to share the same data structure (which contains functions, so JSON is not an option) across all cluster instances in Node.js. I have a data structure called 'Users' that tracks user sessions and contains the functions they have access to. I need to be able to share this data structure across all Node processes, or I need an alternative design pattern. Does anyone know of any solutions to this issue? Thanks
I realize this is old and answered, but it may be beneficial to others to note an alternative. The recommended way to handle a situation like this is to place the data structure and its functions in a separate file and require it where needed. That essentially pulls in the "code/functions", while you store (serialize/deserialize) the data itself in any data store.
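A small sketch of that pattern, with made-up module, function, and key names, using Redis as the shared store:

```js
// users.js - the functions live in code; every worker just requires this file.
module.exports = {
  canAccess(user, feature) {
    return user.permissions.includes(feature);
  },
  touch(user) {
    return { ...user, lastSeen: Date.now() };
  },
};
```

```js
// worker.js - each cluster worker requires the shared functions and keeps the
// serializable session data in a common store (node-redis v4 shown here).
const { createClient } = require('redis');
const users = require('./users');

async function handleRequest(userId) {
  const redis = createClient();
  await redis.connect();

  // Only plain data is stored; the behavior comes from the required module.
  const user = JSON.parse(await redis.get(`user:${userId}`));
  if (users.canAccess(user, 'reports')) {
    await redis.set(`user:${userId}`, JSON.stringify(users.touch(user)));
  }

  await redis.quit();
}
```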
There are multiple options for setting up proper IPC (inter-process communication) in Node.js:
using a document or key-value storage solution like Redis (key-value) or MongoDB (NoSQL document storage)
using the integrated IPC functionality of the cluster module (see the send method, sketched below)
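A minimal sketch of the cluster send mechanism; the message shapes are made up, and cluster.isPrimary requires a recent Node version (older versions use cluster.isMaster):

```js
// Minimal cluster IPC sketch: the primary dispatches a job to a worker
// and the worker reports back via process.send.
const cluster = require('cluster');

if (cluster.isPrimary) {
  const worker = cluster.fork();

  worker.on('message', (msg) => {
    console.log('worker finished:', msg);
  });

  worker.send({ cmd: 'resizeImage', file: 'photo.jpg' });
} else {
  process.on('message', (msg) => {
    if (msg.cmd === 'resizeImage') {
      // ... do the actual work here ...
      process.send({ cmd: 'done', file: msg.file });
    }
  });
}
```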
Deciding which of those solutions fits best depends on your requirements and your project setup. For our last project, I decided to use both methods:
IPC for triggering jobs and dispatching partial tasks to different Node.js instances
Redis for centralized session and API-token management
If you are using Express, I highly recommend the Redis session middleware connect-redis. It automatically handles centralized session management for Express-based applications (which also means you can store complex JS objects and access them from all your instances); see the sketch below.
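A minimal sketch of that setup, assuming a connect-redis v6-style API together with node-redis v4 in legacy mode; the secret and connection details are placeholders:

```js
// Centralized sessions with Express + connect-redis: every cluster worker
// reads and writes the same session data in Redis.
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
const { createClient } = require('redis');

const redisClient = createClient({ legacyMode: true });
redisClient.connect().catch(console.error);

const app = express();
app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: 'replace-with-a-real-secret',
  resave: false,
  saveUninitialized: false,
}));

app.get('/', (req, res) => {
  req.session.views = (req.session.views || 0) + 1;
  res.send(`You have viewed this page ${req.session.views} times`);
});

app.listen(3000);
```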
Are there any non-terrible ways of storing secret keys for Google App Engine? Or, at least, less terrible than checking them into source control?
In the meantime, Google added a Key Management Service: https://cloud.google.com/kms/
You could use it to encrypt your secrets before storing them in a database, or store them in source control encrypted. Only people with both 'decrypt' access to KMS and to your secrets would be able to use them.
The fact remains that people who can deploy code will always be able to get to your secrets (assuming your GAE app needs to be able to use the secrets), but there's no way around that as far as I can think of.
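A rough sketch of encrypting a secret with the Node.js KMS client before persisting it; the project, key ring, and key names are placeholders:

```js
// Encrypt a secret with Cloud KMS before writing it to the datastore or
// committing it (ciphertext only).
const { KeyManagementServiceClient } = require('@google-cloud/kms');

async function encryptSecret(plaintext) {
  const client = new KeyManagementServiceClient();
  const keyName = client.cryptoKeyPath(
    'my-project', 'global', 'my-keyring', 'app-secrets'
  );

  const [result] = await client.encrypt({
    name: keyName,
    plaintext: Buffer.from(plaintext),
  });

  // Store this base64 ciphertext; only identities with decrypt access can recover it.
  return result.ciphertext.toString('base64');
}
```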
Not exactly an answer:
If you keep keys in the model, anyone who can deploy can read the keys from the model, and deploy again to cover their tracks. While Google lets you download code (unless you disable this feature), I think it only keeps the latest copy of each numbered version.
If you keep keys in a not-checked-in config file and disable code downloads, then only people with the keys can successfully deploy, but nobody can read the keys without sneaking a backdoor into the deployment (potentially not that difficult).
At the end of the day, anyone who can deploy can get at the keys, so the question is whether you think the risk is minimized by storing keys in the datastore (which you might make backups of, for example) or on deployer's machines.
A viable alternative might be to combine the two: Store encrypted API keys in the datastore and put the master key in a config file. This has some potentially nice features:
Attackers need both access to a copy of the datastore and a copy of the config file (and presumably developers don't make backups of the datastore on a laptop and lose it on the train).
By specifying two keys in the config file, you can do key-rollover (so attackers need a datastore/config of similar age).
With asymmetric crypto, you can make it possible for developers to add an API key to the datastore without needing to read the others.
Of course, then you're uploading crypto to Google's servers, which may or may not count as "exporting" crypto with the usual legal issues (e.g. what if Google sets up an Asia-Pacific data centre?).
There's no easy solution here. Checking keys into the repository is bad, both because it checks in irrelevant configuration details and because it potentially exposes sensitive data. I generally create a configuration model for this, with exactly one entity, and set the relevant configuration options and keys on it after the first deployment (or whenever they change).
Alternatively, you can check in a sample configuration file, exclude the real one from version control, and keep the actual keys locally. This requires some way of distributing the keys, though, and makes it impossible for a developer to deploy unless they have the production keys (and it is all too easy to accidentally deploy the sample configuration file over the live one).
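A rough sketch of such a single-entity configuration model using the Node.js Datastore client; the kind and property names are made up, and the original answer is not tied to any particular runtime:

```js
// One "Config" entity holding the keys, created/updated after deployment
// instead of being checked into the repo.
const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

const configKey = datastore.key(['Config', 'singleton']);

// Run once (e.g. from an admin-only handler) to set or rotate the keys.
async function saveConfig(values) {
  await datastore.save({ key: configKey, data: values });
}

// Read the keys at startup or on first use, then cache them in memory.
async function loadConfig() {
  const [entity] = await datastore.get(configKey);
  return entity; // e.g. { stripeApiKey: '...', mailgunApiKey: '...' }
}
```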
Three ways I can think of:
Store it in Datastore (maybe base64-encode it for one more level of indirection).
Pass it as an environment variable through command-line params during deployment.
Keep a configuration file, git-ignore it, and read it on the server. The file itself can be a .py file if you are using a Python deployment, so there is no reading and storing of .json files.
NOTE: If you take the config-file route, don't store this file in the static public folders!
If you are using Laravel and want to store your keys in Datastore - this package can make that easy while managing performance using caching. https://github.com/tommerrett/laravel-GAE-secret-manager
Google App Engine by default creates a credential for App Engine and injects it into the environment.
Google Cloud client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, Cloud Run, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
So point 2 means that if you grant the permissions to your service account using IAM Admin, you do not have to worry about passing JSON keys; it will automatically work.
E.g.
Suppose your application runs in App Engine Standard and wants access to Google Cloud Storage. To do this you do not have to create a new service account; just grant the access to the default service account that ADC uses.
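For instance, with ADC the Node.js client library needs no key file at all; the bucket name below is a placeholder:

```js
// With ADC, the client library picks up App Engine's default service account
// automatically; no key file appears in code or config.
const { Storage } = require('@google-cloud/storage');

// No credentials passed explicitly; ADC resolves them from the environment.
const storage = new Storage();

async function listFiles() {
  const [files] = await storage.bucket('my-app-uploads').getFiles();
  files.forEach((f) => console.log(f.name));
}
```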
REF https://cloud.google.com/docs/authentication/production#finding_credentials_automatically