Best way to communicate between Backbone and Express.js [closed]

I'm developing a Node.js & Express & Backbone & MongoDB app. I have a list of items fetched via a URL.
In that list I want to put a select to sort the items by price. I don't know which is the best way to do this sorting: should I put links in the select with a specific URL like [URL]&sort=1 and fetch it the way I fetch all items, or can I skip the link and fetch the collection from Backbone in some other, better way?
Thanks, regards

My view on this is as follows. Unless you must, avoid sorting on the server; do it on the client. It is far more responsive that way. If you have 20 items and want to sort by name, quantity in inventory, or price (each ascending or descending), it is far better to do it on the client.
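For instance, here is a minimal sketch of client-side sorting with a Backbone comparator (the collection, URL, and attribute names are assumptions, not from the question):

var Items = Backbone.Collection.extend({
  url: '/items',
  comparator: 'price' // default sort: by price, ascending
});

var items = new Items();
items.fetch();

// When the user changes the select, swap the comparator and
// re-sort in place -- no extra server round trip needed.
function sortItemsBy(field, descending) {
  items.comparator = function (a, b) {
    var left = a.get(field), right = b.get(field);
    var result = left < right ? -1 : left > right ? 1 : 0;
    return descending ? -result : result;
  };
  items.sort(); // fires a 'sort' event the view can re-render on
}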
When you must is when you have, say, 1,000 items. You simply will not load 1,000 items in one call and then sort them. You will probably retrieve around 30-50 at a time, depending on the size of each item. So you need a way to tell the server, "give me the following 30 items" or, more accurately, "give me 30 items beginning at point X."
There are lots of ways to do it, even within REST. In general, you are best off passing the starting point and count via query parameters, as you did. So if you are getting widgets, and your API is
GET /widget
then you would have several query fields: field (which field you are sorting on), order (one of asc or desc), count (an integer) and start (the point at which to start, either a record ID or an index).
That gives the following:
GET /widget?field=name&count=30&start=100&order=asc // widgets sorted by name in ascending order, starting with the 100th widget and returning 30 widgets
GET /widget?field=price&count=20&start=225&order=desc // widgets sorted by price from highest to lowest (descending order), starting with the 225th widget and returning 20 widgets
There are variants on this, e.g. instead of start and count you might do start and end. I have seen order called sort. But the idea is the same: those four fields give you all your server needs to know to retrieve a defined set of items.
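A rough sketch of the matching server side in Express with Mongoose (the Widget model and the whitelist of sortable fields are assumptions):

var express = require('express');
var app = express();
var Widget = require('./models/widget'); // hypothetical Mongoose model

// GET /widget?field=price&order=desc&count=20&start=225
app.get('/widget', function (req, res) {
  var field = req.query.field || 'name';
  var order = req.query.order === 'desc' ? -1 : 1;
  var count = parseInt(req.query.count, 10) || 30;
  var start = parseInt(req.query.start, 10) || 0;

  // Whitelist sortable fields so clients cannot sort on arbitrary keys.
  if (['name', 'price', 'quantity'].indexOf(field) === -1) {
    return res.status(400).json({ error: 'cannot sort on that field' });
  }

  var sort = {};
  sort[field] = order;

  Widget.find()
    .sort(sort)
    .skip(start) // treats start as an index; a record ID would need a range query instead
    .limit(count)
    .exec(function (err, widgets) {
      if (err) return res.status(500).json({ error: err.message });
      res.json(widgets);
    });
});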

Related

how to properly paginate through an Amazon FBA orders list using SPAPI? [closed]

I've been scratching my head for a while trying to figure out how to paginate through the SPAPI Orders API with FBA, and it's gotten a little frustrating trying to sort it out.
For example, this URI gives me the first page of orders for the US based on a "CreatedAfter" cutoff date:
https://sellingpartnerapi-na.amazon.com/orders/v0/orders?MarketplaceIds=ATVPDKIKX0DER&CreatedAfter=2023-01-26
My assumption is that I don't need to pass along a "CreatedBefore" date in my first "hit" of the API; it will just give me the first page of orders, plus a "NextToken" and "CreatedBefore" at the bottom of the payload. We have tried it with and without a "CreatedBefore" on the first pass.
My assumption is that these two handy pieces of info in the payload go hand in hand to ensure that I'm truly getting "the next page," not just a potentially random list of order IDs that happen to fall within the date range. That way I can be confident my API feed is pulling all orders and that I won't have to come back later and "backfill" missing orders through some inelegant process, which is the situation I'm in now, because we can't seem to get this pagination to work as expected.
The other assumption is that I'll eventually hit an "end" and stop getting a new "NextToken," which doesn't happen.
What actually happens when I try to use it as expected is that I never reach the end; I just keep getting the same second page over and over, with a new NextToken each time.
So instead of consistently hitting the same base URI with the "CreatedBefore" parameter pulled from the first payload and the "NextToken" in the header, our code currently queries our database for the most recent Amazon purchase date and hits the URL again using that as the CreatedAfter date. That definitely leaves us with holes, so I tried backing the date up by 10 seconds, and so on; it's just a major thorn in my side right now.
Initially, we tried to sort this out in Postman, just feeding "NextToken" into the same API endpoint, before noticing we were just getting the same payload over and over.
I've kind of explained the compromise in our current code: we read a fresh "created after" date from the de-serialized MAX("PurchaseDate") of the orders we've already unpacked and loaded into our DB. This sort of works, but we're left with major holes in the data: we can run a report on Amazon FBA and find that 40% of the Amazon order IDs aren't in our DB. We can force-feed those back in one at a time, or have our code run from our "go live" date to the present looking for unprocessed orders in the payloads, but we just don't have a precise method for safely paginating through a list of orders from "startdate" to "enddate."
According to the documentation:
"If NextToken is present, that will be used to retrieve the orders instead of other criteria."
So don't modify any other fields when you submit the second request. Just insert the NextToken value that was returned by the first call and leave all the other fields the same.
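Put together, the paging loop should look roughly like this sketch (request signing via LWA/SigV4 is omitted, and getSignedJson is a stand-in for however you already make authorized calls). One pitfall worth checking: NextToken values often contain characters that must be URL-encoded, and an unencoded token is a classic cause of getting the same page back repeatedly.

var BASE = 'https://sellingpartnerapi-na.amazon.com/orders/v0/orders';

async function fetchAllOrders(createdAfter) {
  var orders = [];
  // First call: date criteria only.
  var url = BASE + '?MarketplaceIds=ATVPDKIKX0DER&CreatedAfter=' + createdAfter;
  var body = await getSignedJson(url); // assumed helper that signs the request and parses the JSON

  while (true) {
    orders = orders.concat(body.payload.Orders);
    var next = body.payload.NextToken;
    if (!next) break; // no NextToken in the payload means this was the last page

    // Follow-up calls: same query as before, plus the URL-encoded NextToken.
    // Do NOT recompute CreatedAfter from your database between pages.
    body = await getSignedJson(url + '&NextToken=' + encodeURIComponent(next));
  }
  return orders;
}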

How to handle drag and drop lists in mongodb/express? [closed]

I'm building a kanban-board-like SPA with user boards that contain lists, and lists that contain cards, all with drag-and-drop functionality, using Vue.js on the front end and Express.js with Mongoose on the back end.
I'm currently stuck on how to set up my models and sorting correctly.
Could anyone give me an example of three simple Mongoose models for a board, a list, and a card that would work for drag-and-drop functionality for lists and cards, together with sorting? Routes would be awesome too.
I'm completely lost as to whether I should use embedding or referencing; I've tried both ways but neither seems right.
EDIT:
I know how Trello/Wekan/KanbanFlow do it, and I know about the WebSockets approach, but for now I just need a very simple, basic setup without realtime updates.
From what I gather -
Board model - contains a members array with the IDs of users
List model - contains the board ID and a list of cards (refs or embedded?), or just the board ID?
Card model - contains the board ID and list ID?
How would I query the cards for the board view, given they have to be in their own respective lists?
I get a board by ID, then use aggregation and look up/fill the board's lists, then for each of those lists look up/fill the cards? That sounds like a lot of querying going on; not really efficient.
For the board - I only really need to add members and change the title.
For lists - I need to be able to re-order them and change the title.
For cards - a lot of stuff going on here: title, description, card members, activities, comments, etc. I think I'll use referenced activities, comments, card members, and so on, but my main concern is how to handle the re-ordering and creating/deleting of cards/lists with drag and drop.
https://codesandbox.io/s/jv4mj9vl33
Here's an example app sandbox to get a better understanding of the functionality I want to achieve, and of where I am so far. With this logic, I have 3 collections (boards, lists, and cards), and each is referenced to its parent by ID.
The decision between embedding and referencing is often influenced by the way you want to query the data.
It is also an interesting question whether cards would contain discussions (but that is another discussion), because that would bring in another nesting level.
What about
var mongoose = require('mongoose');
// bucket is the list/column a card sits in; position orders cards within it
var schema = new mongoose.Schema({ bucket: String, position: Number, title: String, body: String, isDone: Boolean });
var Card = mongoose.model('Card', schema);
The board would then be all cards grouped by bucket and ordered by position within the bucket. (For the grouping, it is recommended to use the aggregate function.)
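A sketch of that grouping with the schema above:

// Build the whole board in one query: order the cards, then group them per bucket.
Card.aggregate([
  { $sort: { bucket: 1, position: 1 } },
  { $group: { _id: '$bucket', cards: { $push: '$$ROOT' } } }
]).exec(function (err, buckets) {
  // buckets looks like: [{ _id: 'todo', cards: [...] }, { _id: 'doing', cards: [...] }]
});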
Dragging within a bucket (changing the position) means updating the positions of the affected cards. Moving a card between buckets means updating its bucket field.
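A rough Express handler for a drop event (the route shape and body fields are assumptions, and JSON body parsing is assumed to be set up):

// The client reports where the card landed; the server rewrites bucket and position.
app.put('/cards/:id/move', function (req, res) {
  var bucket = req.body.bucket;     // target bucket
  var position = req.body.position; // target index within that bucket

  Card.findByIdAndUpdate(req.params.id, { bucket: bucket, position: position }, function (err) {
    if (err) return res.status(500).end();
    // Make room: shift every other card at or below the insertion point down by one.
    Card.updateMany(
      { bucket: bucket, position: { $gte: position }, _id: { $ne: req.params.id } },
      { $inc: { position: 1 } },
      function (err2) {
        if (err2) return res.status(500).end();
        res.status(204).end();
      }
    );
  });
});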
When you look at Trello, I think they work with WebSockets, which push changes to the clients, and the client reacts to the updates.
This would mean a totally different model: the server stores all changes for a board, so you would need to model every change as a command/event, e.g. "CARD_CREATED" or "CARD_MOVED_TO_BUCKET", each with its corresponding payload. With this, you can push changes to the server and back out to the clients. Look at event sourcing, CQRS, and Redux for more on storing state as a series of changes. Think about your bank account: the total is an aggregated result of all debits and credits.
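To make that concrete, a stored stream for one board might look like this (event names and payloads are illustrative only):

// Every user action is appended as an event; the board is rebuilt by replaying them.
var events = [
  { type: 'CARD_CREATED', payload: { cardId: 'c1', bucket: 'todo', title: 'Ship it' } },
  { type: 'CARD_MOVED_TO_BUCKET', payload: { cardId: 'c1', from: 'todo', to: 'doing' } }
];

// A redux-style reducer folds the stream into the current board state.
function reduce(board, event) {
  switch (event.type) {
    case 'CARD_CREATED':
      board[event.payload.cardId] = { bucket: event.payload.bucket, title: event.payload.title };
      return board;
    case 'CARD_MOVED_TO_BUCKET':
      board[event.payload.cardId].bucket = event.payload.to;
      return board;
    default:
      return board;
  }
}

var board = events.reduce(reduce, {}); // { c1: { bucket: 'doing', title: 'Ship it' } }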
So while the first approach will work for you, the second one could be interesting to look into.

Filtering SharePoint List by Another SharePoint List

I posted this question on Stack Exchange here: (https://sharepoint.stackexchange.com/questions/249418/filtering-sharepoint-list-by-another-sharepoint-list), but just realized I should have posted it to Stack Overflow instead. Hope it's not bad form to cross-post (I'll add a link to this post in the other post).
I've been searching the forums and doing research online with no luck; apologies if this has been answered before.
I have a list with several thousand items in it. I often receive bulk update requests where I need to update several hundred of these items at a time (let's say for this example that we're using a field called "Case ID").
Here's what I've tried:
1. Searching cases individually, or up to three at a time in datasheet view; this is not time-effective.
2. Exporting the list and manually manipulating the data in Excel, then pasting in (and writing over) the data in the column that needs to be updated; this approach is not user-friendly, is not necessarily time-effective, and has potential side effects (causing errors for users who are modifying items that I am changing in bulk).
3. Creating custom views that isolate this data; the problem is that the lists of cases I need to modify generally do not have enough in common to isolate them using the view filter logic.
So, my guess is that I need two lists, likely connected with a web part. The first list would exist solely for the purpose of querying the second list. I would enter the Case IDs I wanted to filter by in the first list, and the second list would filter to show only those Case IDs. All items would be deleted from the first list between queries.
I'm not married to this approach; it's just my best guess. I'm open to creative and alternative approaches, but the final process needs to be user-friendly (business partners will be using it).
Does anyone know how I can accomplish this? I've tried to get something implemented several times over the past few years and have never been successful; posting here is my last resort before I throw in the towel.
I have SP 2013, and have SharePoint Designer; please let me know if I need to add any other information.
Thanks in advance for the support,
Chad
I'd suggest creating a JSOM application to do all the updates. It can query only the items that need updating and then update them item by item.
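For example, a minimal JSOM sketch (the list title and field names are placeholders for your setup; CAML's <In> clause caps the number of values, so feed it batches of at most a few hundred Case IDs):

// Runs on a SharePoint page where sp.js is loaded.
function bulkUpdate(caseIds, newValue) {
  var ctx = SP.ClientContext.get_current();
  var list = ctx.get_web().get_lists().getByTitle('Cases'); // placeholder list title

  // CAML <In> filter: fetch only the items whose Case ID is in this batch.
  var values = caseIds.map(function (id) {
    return '<Value Type="Text">' + id + '</Value>';
  }).join('');
  var query = new SP.CamlQuery();
  query.set_viewXml('<View><Query><Where><In><FieldRef Name="CaseID" />' +
    '<Values>' + values + '</Values></In></Where></Query></View>');

  var items = list.getItems(query);
  ctx.load(items);
  ctx.executeQueryAsync(function () {
    var e = items.getEnumerator();
    while (e.moveNext()) {
      var item = e.get_current();
      item.set_item('Status', newValue); // placeholder field to update
      item.update();
    }
    // Second round trip commits every update in one batch.
    ctx.executeQueryAsync(
      function () { console.log('Batch updated'); },
      function (s, args) { console.log('Update failed: ' + args.get_message()); }
    );
  }, function (s, args) { console.log('Query failed: ' + args.get_message()); });
}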

How to define a PBI that has no perceived value to the user? [closed]

I need to add an item to our product backlog list that has no (perceived) value to the users.
Context: every week we need to parse a TXT file and import it into our system. Now the provider has decided to change the format to XML, so we need to rewrite the parsing engine.
In the end the user won't see any benefit, as he'll keep getting his new data either way, but we still have to do this to keep importing the data.
How to add an item like this to the product backlog list?
What happens if you don't make the change? Is there value to the user in preventing that from happening? If the answer is yes, I'd recommend tying your business value statement to that. Then, you can write a typical user story with business value and treat it like any other PBI.
It has no value to the user, but it has value to your company.
As company X I want to be able to support the new XML format so that I can keep importing data from provider Y.
How does that sound? Not all stories necessarily revolve around the end user.
Note: technical stories and technical-improvement stories are not a good practice, and they should be avoided. Why? Because you can't prioritize them correctly, as they have no estimable value.
The correct way to do tech stories is to include them in the definition of done. For example: decide that every new story played is only complete once database access is via Dapper and not L2S. This is a viable DoD definition and makes sure you can evolve your system appropriately.
We typically just add it as a "technical improvement" and give it a priority that we think fits. If the user asks about it, you explain to them at a high level what the change does and why it's needed.
Don't forget that your application will most likely start failing in the future if you don't make the change. Just tell them that, and let them decide whether they want that or not.

Counts of web search hits [closed]

I have a set of search queries, approximately 10 million of them. The goal is to collect the number of hits returned by a search engine for each of them. For example, Google returns about 47,500,000 hits for the query "stackoverflow".
The problem is that:
1. The Google API is limited to 100 queries per day. This is far from useful for my task, since I would need a huge number of counts.
2. I used the Bing API, but it does not return an accurate number, accurate in the sense of matching the number of hits shown in the Bing UI. Has anyone come across this issue before?
3. Issuing search queries to a search engine and parsing the HTML is one solution, but it triggers CAPTCHAs and does not scale to this number of queries.
All I care about is the number of hits, and I am open to any suggestions.
Well, I was really hoping that someone would answer this, since it's something I was also interested in finding out, but since it doesn't look like anyone will, I'll throw in these suggestions.
You could set up a series of proxies that change their IP every 100 requests so that you can query Google as seemingly different people (seems like a lot of work). Or you could download Wikipedia and write something to parse the data there, so that when you search a term you can see how many pages it appears in. Of course, that is a much smaller dataset than the whole web, but it should get you started. Another possible data source is the Google n-grams data, which you can download and parse to see how many books and pages the search terms appear in. Maybe a combination of these methods could boost the accuracy for any given search term.
Certainly none of these methods is as good as getting the Google page counts directly, but understandably that is data they don't want to give out for free.
I see this is a very old question, but I was trying to do the same thing, which brought me here. I'll add some info and my progress to date.
Firstly, the reason you get an estimate that can change wildly is that search engines use probabilistic algorithms to calculate relevance. This means that during a query they do not need to examine all possible matches in order to calculate the top N hits by relevance with a fair degree of confidence. That means that when the search concludes, for a large result set, the search engine actually doesn't know the total number of hits. It has seen a representative sample, though, and it can use some statistics about the terms in your query to set an upper limit on the possible number of hits. That's why you only get an estimate for large result sets. Running the query in such a way that you got an exact count would be much more computationally intensive.
The best I've been able to achieve is to refine the estimate by tricking the search engine into looking at more results. To do this, go to page 2 of the results and then modify the 'first' parameter in the URL to go much higher. Doing this may let you find the end of the result set (this worked for me last year, although today it only worked up to the first few thousand results). Even if it doesn't get you to the end of the result set, you will see the estimate improve as the query engine considers more hits.
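For example, on Bing that means taking the page-2 URL and raising the first parameter by hand (Google's equivalent parameter is start; both are subject to change):

https://www.bing.com/search?q=stackoverflow&first=11    (page 2)
https://www.bing.com/search?q=stackoverflow&first=9991  (jump deep into the result set)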
I found Bing slightly easier to use in the above way, but I was still unable to get an exact count for the site I was considering. Google seems to be actively preventing this use of their engine, which isn't that surprising. Bing also seems to hit limits, although those looked more like defects.
For my use case I was able to get both search engines to fairly similar estimates (148k for Bing, 149k for Google) using the above technique. The deepest result I was able to reach on Google was the 323rd, whereas Bing went up to the 700th; both wildly short of the estimates, but not surprising, since this is not the intended use of the product.
If you want to do this for your own site, you can use the search engines' webmaster tools to view the indexed page count. For other sites, I think you'd need to use a search engine API (at some cost).
