Sync two Shopify stores using API - node.js

I'm currently working on two Shopify stores and I want to synchronize the customer accounts between them.
I saw in the Shopify dev docs that there is an API to retrieve all the customers, and I managed to make it work.
My problem is: how can I use the returned JSON data to update my 2nd store's database?

It is very easy. I did it like this:
Download all the customers from the store you consider the source. Bulk download or cursor-based pagination, it does not matter.
For each customer encountered, search the other store for that customer, using the customer's email for example. You either get a record back or you don't: if you do, update it; if you don't, create it.
Unfortunately, as an exercise in programming, there are 1001 ways to do this, and we have no idea what your skills or preferences are.
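That said, here is a minimal sketch of the loop above in Node.js, assuming Node 18+ (for the global fetch) and Admin API access tokens for both stores; the store domains, tokens, and API version below are placeholders:

// Sync customers from a source store into a target store.
const SOURCE = { shop: 'source-store.myshopify.com', token: 'shpat_source' };
const TARGET = { shop: 'target-store.myshopify.com', token: 'shpat_target' };
const API_VERSION = '2024-01';

async function shopifyFetch(store, path, options = {}) {
  const res = await fetch(`https://${store.shop}/admin/api/${API_VERSION}${path}`, {
    ...options,
    headers: {
      'X-Shopify-Access-Token': store.token,
      'Content-Type': 'application/json',
    },
  });
  if (!res.ok) throw new Error(`Shopify ${res.status}: ${await res.text()}`);
  return res.json();
}

async function syncCustomers() {
  // 1. Download customers from the source store. Only the first page is
  //    fetched here; follow the Link response header for cursor pagination.
  const { customers } = await shopifyFetch(SOURCE, '/customers.json?limit=250');

  for (const c of customers) {
    // 2. Look the customer up in the target store by email.
    const query = encodeURIComponent(`email:${c.email}`);
    const found = await shopifyFetch(TARGET, `/customers/search.json?query=${query}`);

    const payload = {
      customer: { email: c.email, first_name: c.first_name, last_name: c.last_name },
    };

    if (found.customers.length > 0) {
      // 3a. Match found: update the existing record.
      await shopifyFetch(TARGET, `/customers/${found.customers[0].id}.json`,
        { method: 'PUT', body: JSON.stringify(payload) });
    } else {
      // 3b. No match: create the customer.
      await shopifyFetch(TARGET, '/customers.json',
        { method: 'POST', body: JSON.stringify(payload) });
    }
  }
}

syncCustomers().catch(console.error);

A real sync would also need to respect Shopify's API rate limits, for example by pausing between requests.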

Related

Where should product information live when using Stripe?

What is the right place to store things like product price, title, and description when using Stripe with many products?
The app I'm building will potentially have hundreds of products, and I would like to be able to easily list, paginate, and search them. Should this product data be duplicated in both my database and Stripe?
The products in question are going to be courses, for an e-learning platform. They need to be integrated with the rest of my schema, so they do need to exist in my database too, but I would like to avoid duplication of certain fields, if possible. I wonder if there's a recommended approach for this.
If it were me I would cache this information in my database and sync it in Stripe as needed.
You want the product and pricing information in Stripe since it's needed for their Product and Price APIs. It's used to display detailed information in email receipts and invoices, on hosted pages like Checkout, and in various reconciliation options.
You also want the information in your database, since you don't want to hit Stripe's API every time you need to retrieve a product's name or pricing details, especially if you have many products.
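A minimal sketch of that cache-and-mirror pattern in Node.js, assuming the official stripe npm package; the db object and its saveCourse helper are hypothetical stand-ins for your own persistence layer:

// Create a course in your own database and mirror it into Stripe.
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

async function createCourse(db, { title, description, priceCents }) {
  // Mirror the course into Stripe as a Product with a Price...
  const product = await stripe.products.create({ name: title, description });
  const price = await stripe.prices.create({
    product: product.id,
    unit_amount: priceCents,
    currency: 'usd',
  });

  // ...then persist it locally, keeping the Stripe IDs so later changes can
  // be pushed to both places. Listing, pagination, and search all hit your
  // own database, never Stripe.
  return db.saveCourse({
    title,
    description,
    priceCents,
    stripeProductId: product.id,
    stripePriceId: price.id,
  });
}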

Chrome Extension Database Storage

I am working on a page action extension and would like to store information that all users of the extension can access. The information will be key:value pairs, where the key is a web url and the value is an array of links.
I have to be able to update the database without redeploying the extension to the Chrome Web Store. What should I look into using? The storage APIs seem oriented towards user data rather than data stored by the app and updated by the developer.
If you want something to be updated without deploying an updated version through CWS, you'll need to host the data yourself somewhere and have the extension query it.
Using chrome.storage.local as a cache for said data would be totally appropriate.
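For example, a minimal sketch of that cache pattern (Manifest V3, promise-based storage API; the hosted URL and refresh interval are assumptions):

const DATA_URL = 'https://example.com/extension-data.json'; // your hosted data
const MAX_AGE_MS = 60 * 60 * 1000; // refresh at most once per hour

async function getLinkData() {
  const { data, fetchedAt } = await chrome.storage.local.get(['data', 'fetchedAt']);
  if (data && Date.now() - fetchedAt < MAX_AGE_MS) {
    return data; // cache is still fresh, no network hit
  }
  const fresh = await (await fetch(DATA_URL)).json();
  await chrome.storage.local.set({ data: fresh, fetchedAt: Date.now() });
  return fresh;
}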
The question is pretty broad, so I'll give you some ideas I've used before.
Since you say you don't want to republish when the DB changes, you need to host the data yourself. That doesn't mean you need to host an actual database, just a way for the extension to get the data.
Ideally, you are only ever adding new pairs. If so, an easy way is to store your pairs in a public Google spreadsheet. The extension then remembers the last row synced and uses the row feed to get data incrementally.
There are a few tricks to getting the spreadsheet sync right; take a look at my GitHub project "Plus for Trello" for a full implementation.
This is a good way to sync incrementally, though if the DB isn't huge you could just host a CSV file and fetch it periodically from the extension, as sketched below.
Now that you can get the data into the extension, decide how to store it. chrome.storage.local or IndexedDB should both be fine, though IndexedDB is usually best if you later need to query things more complex than a hash table.
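Here is a minimal sketch of that CSV variant with IndexedDB storage; the URL, database name, and the url,link1,link2,... row format are all assumptions:

const CSV_URL = 'https://example.com/pairs.csv';

function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('pairsDb', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('pairs', { keyPath: 'url' });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function syncPairs() {
  // Remember how many rows were already synced so each fetch is incremental.
  const { lastRow = 0 } = await chrome.storage.local.get('lastRow');
  const text = await (await fetch(CSV_URL)).text();
  const rows = text.trim().split('\n');

  const db = await openDb();
  const tx = db.transaction('pairs', 'readwrite');
  for (const row of rows.slice(lastRow)) {
    const [url, ...links] = row.split(',');
    tx.objectStore('pairs').put({ url, links }); // value is the array of links
  }
  await new Promise((resolve, reject) => {
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
  await chrome.storage.local.set({ lastRow: rows.length });
}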

storing quick analytics using redis and node.js

I am new to Redis and would like to store web analytics for a site, both globally and per user activity.
Below is what I am stuck with.
// to get all unique IPs
client.sadd('visitors', ip);
// to record hits per IP
client.hincrby('hits', ip, 1);
The above works fine so far, and I do get the number of different IPs and a hit counter per IP.
The problem comes when storing the activities made by each IP, i.e. storing the links they clicked and the searches they made, with a datetime.
Can someone please throw some light on how best to manage this?
Thanks
"the problem comes when storing the activities made by each IP"
You will need a separate structure for storing these.
The simplest rational structure is a "list of actions per session". Take a look at the sorted set commands, which provide a basic framework for keeping an ordered list of actions within a session.
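For example, a minimal sketch using the same node_redis client as above; the key names are illustrative:

// Log each action into a per-IP sorted set, scored by timestamp so entries
// come back in chronological order. The timestamp prefix also keeps members
// unique, since sorted set members must be distinct.
function logActivity(ip, action) {
  const now = Date.now();
  // e.g. action = 'click:/pricing' or 'search:redis tutorial'
  client.zadd(`activity:${ip}`, now, `${now}:${action}`);
}

// Fetch everything this IP did in the last hour, oldest first.
function recentActivity(ip, callback) {
  const hourAgo = Date.now() - 60 * 60 * 1000;
  client.zrangebyscore(`activity:${ip}`, hourAgo, '+inf', callback);
}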
This will get you something quickly. However, it is probably not what you really want; in fact, Redis is probably not the right tool for this at all.
If you want to re-trace an entire site visit, you really want to connect to some sort of true analytics framework. There are dozens of website tracking tools that provide this type of functionality, so it's not really clear that building one yourself is very efficient.

So, you're not allowed to copy data from foursquare's Venue Platform?

I was reading the API policy of foursquare Venue Platform.
"You may not use the API to to add new places to your database or alter location details for places in your database."
It raised two questions for me:
1. How would they know if one added new places, etc. to his/her own database?
2. I hear that foursquare (used to) use the Google Maps API to retrieve location information, so does that mean it is viable to use Google Maps data to create one's own basic database?
Any help is appreciated.
Basically they are telling you: you can use our API and database to create great apps, but do not steal our know-how (the database copy). They probably won't find out if you copy a few entries, but say you build a startup on a full copy of their database; then they can sue you and get you in trouble.
For Google usage, refer to Google Policy here.

Why do I need a flickr api key?

The Flickr API documentation keeps stating that I require an API key to use their REST protocols. I am only building a photo viewer, gathering information available from Flickr's public photo feed (for instance, I am not planning on writing an upload script, where an API key would be required). Is there any added functionality I can get from getting a key?
Update: I answered the question below.
To use the Flickr API you need to have an application key. We use this to track API usage.
Currently, commercial use of the API is allowed only with prior permission. Requests for API keys intended for commercial use are reviewed by staff. If your project is personal, artistic, free or otherwise non-commercial please don't request a commercial key. If your project is commercial, please provide sufficient detail to help us decide. Thanks!
http://www.flickr.com/services/api/misc.api_keys.html
We set up an account and got an API key. The answer to the question is: yes, there is advanced functionality with an API key when creating something like a simple photo viewer. The flickr.photos.search method has many more features for receiving a feed of images than the public photo feed, such as only retrieving new photos since the last feed request (via the min_upload_date parameter) or searching for "safe" photos only.
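For example, a minimal sketch of such a keyed search call, assuming Node 18+ (global fetch); the key value is a placeholder:

const API_KEY = 'your-flickr-api-key';

// Fetch only photos uploaded since the last request, restricted to "safe" content.
async function newSafePhotos(lastRequestUnixTime) {
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: API_KEY,
    min_upload_date: String(lastRequestUnixTime),
    safe_search: '1', // 1 = safe photos only
    format: 'json',
    nojsoncallback: '1',
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  return res.json();
}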
If you have a key, they can monitor your usage and make sure that everything is copacetic: you are below the request limit, and so on. They can separate their stats on regular vs. API usage. If they are having response-time issues, they can slow responses to API users a bit in order to keep the main website responding quickly.
Those are the benefits to them.
The benefits to you? If you just write a scraper and it does something they don't like, such as hitting them too often, they'll block you unceremoniously for breaking their ToS.
If you only want to hit the thing a couple of times, you can get away without the key. If you are writing a service that will hit their feed thousands of times, you want to give them the courtesy of following their rules.
Plus like Dave Webb said, the API is nicer. But that's in the eye of the beholder.
The Flickr API is very nice and easy to use and will be much easier than scraping the feed yourself.
Getting a key takes about 2 minutes: you fill in a form on the website and they email it to you.
Well, if they say you need a key, then you need a key :-) Exposing an API means data can be pulled off the site far more easily, so it is understandable that they want this under control. It is pretty much the same as with other API-enabled sites.
