I'm using IBM's Cloudant (CouchDB) data store. I'm planning on storing dates as integers in the format YYYYMMDD instead of JavaScript Dates. Is there any CouchDB functionality that I'd be missing out on by not storing them as JavaScript Dates? Any other reason I shouldn't do this?
I've read this SO Q&A: What's the best way to store datetimes (timestamps) in CouchDB? From that, there appear to be no objections to storing dates in any format, but it doesn't answer what built-in functionality might be lost.
You wouldn't lose any functionality: you make the date useful by processing it in a map function, whether for a secondary index (view), a search index, or as part of a Cloudant Query.
The only downside is that, formatted this way, it is harder to use the JavaScript Date functions within a map function to manipulate the date as needed.
Storing it as a string is another option, and may be easier to handle than an integer.
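For example, a view's map function can split the integer back into date parts at index time. A minimal sketch, assuming the document stores the value in a `date` field (the field name and the key shape are assumptions for illustration):

```javascript
// Splits a YYYYMMDD integer into its date parts.
function parseYyyymmdd(n) {
  var s = String(n);                 // 20240315 -> "20240315"
  return [
    parseInt(s.slice(0, 4), 10),    // year  -> 2024
    parseInt(s.slice(4, 6), 10),    // month -> 3
    parseInt(s.slice(6, 8), 10)     // day   -> 15
  ];
}

// The map function itself (emit is supplied by CouchDB at index time).
// Emitting [year, month, day] keys allows startkey/endkey range queries,
// e.g. all documents from March 2024.
function map(doc) {
  if (doc.date) {
    emit(parseYyyymmdd(doc.date), null);
  }
}
```

With keys in that shape, a query with `startkey=[2024, 3]` and `endkey=[2024, 3, {}]` would cover a whole month, which is the kind of thing you'd otherwise reach for Date objects to do.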
Related
I use UUIDs for just about every ID in my REST backend powered by Node and Postgres. I also plan to use validate.js to make sure the queries are formatted correctly.
In order to shorten the URLs for my application, I would like to convert all UUIDs used by my backend into URL-safe strings when exposed to the REST consumer.
The problem is that, as far as I can tell, there is no such setting in node-pg. And node-pg usually returns the query results as JSON objects containing strings or numbers, which makes it hard to convert them automatically.
I could of course just go through every single rest endpoint and add code that automatically converts all the types where I know a UUID would be. But that would violate DRY and also be a hotbed for bugs.
I could also try to automatically detect strings that look like UUIDs and convert them, but that also seems like it may introduce lots of bugs.
One ideal solution would be some sort of custom code injection into node-pg that automatically converts uuids. Or maybe just some pg function I could use to automatically convert the uuids within the pg-queries themselves (although that would be a bit tedious).
Another ideal solution might be some way to use validate.js to convert the outputs and inputs during validation. But I don't know how I could do this.
So basically: what would be a good way to automatically convert UUIDs in node-pg to URL-safe (shorter) strings without having to add code to every single endpoint?
I think this is what I want: https://github.com/brianc/node-pg-types
It lets me set custom converters for each data type. The input will probably still have to be converted manually.
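For the output side, a sketch of what such a converter could look like, assuming Node.js 16+ (for Buffer's 'base64url' encoding); the pg-types registration at the bottom is the per-data-type hook that node-pg-types provides:

```javascript
// Converts a canonical UUID string to a 22-character URL-safe string
// by re-encoding its 16 raw bytes as base64url.
function uuidToShort(uuid) {
  const hex = uuid.replace(/-/g, '');
  return Buffer.from(hex, 'hex').toString('base64url');
}

// Inverse conversion: base64url back to the dashed 8-4-4-4-12 form.
function shortToUuid(short) {
  const hex = Buffer.from(short, 'base64url').toString('hex');
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20)
  ].join('-');
}

// With node-pg-types you could then register the converter once, e.g.:
//   const types = require('pg-types');
//   types.setTypeParser(2950, uuidToShort); // 2950 is Postgres's uuid OID
// so every uuid column comes back already shortened. Query parameters
// going in would still need shortToUuid applied by hand.
```

This keeps the conversion in one place, so the endpoints never see a raw UUID at all.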
In a Firebase Cloud Function I am using TypeScript, and I am not able to find a way to convert a Firestore timestamp to a human-readable format. Currently it is showing as '063720731421.495000000'.
How to solve this any idea?
A Firestore Timestamp just contains two values (seconds and nanoseconds) representing an offset from the Unix epoch. This is similar to the Date object that some platforms use to represent a point in time.
The Firestore APIs aren't going to do anything to help you format the date. There are lots of libraries out there for various languages and platforms that will format dates for you. It should be trivial to convert Firestore's Timestamp into something you can pass to one of these libraries. In fact, on many platforms, Timestamp has a method to convert it to a native Date object.
In the JavaScript environment, you can get a Date object by simply calling toDate() on the Timestamp. After that, a library such as moment.js can help you format it any way you want.
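To illustrate what toDate() is doing under the hood, here is a hand-rolled conversion. This is a minimal sketch that assumes the seconds and nanoseconds fields are plain numbers:

```javascript
// Converts a { seconds, nanoseconds } pair (offset from the Unix epoch,
// as a Firestore Timestamp stores it) into a JavaScript Date.
function timestampToDate(ts) {
  return new Date(ts.seconds * 1000 + Math.floor(ts.nanoseconds / 1e6));
}

// With the real SDK you would simply call ts.toDate(), then hand the
// Date to a formatting library, e.g.:
//   moment(ts.toDate()).format('YYYY-MM-DD HH:mm:ss');
```

Once you have a Date, toISOString(), toLocaleString(), or any date library will give you a human-readable form.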
Rather simple question here. Using CloudSearch, how do I find an object that does NOT have a certain key/property defined?
e.g. I have been storing Car objects all along without indexing their price. Now I have begun indexing Car objects with their msrp... how do I find the Car objects stored without any indexed price?
(and price:null)
(and price:undefined)
and other similar 'falsy' values (and their stringified permutations) all do not work.
I am using AWS sdk in Node.js.
TIA!
Niko
The option that will work without any reindexing is a range search like
(NOT (range field=price [0,}))
which matches cars with a price that is not between 0 and infinity (i.e. ones with no price at all). See this answer for a discussion of other options.
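As a sketch, issuing that query from Node.js with the AWS SDK's CloudSearchDomain client could look like the following; the helper name is made up, and the endpoint would be your own search domain:

```javascript
// Builds the search parameters for "documents with no indexed price".
// The structured query parser is required for the (not (range ...)) syntax.
function buildMissingPriceParams() {
  return {
    // Matches documents whose price is NOT in [0, infinity),
    // i.e. documents that have no price field indexed at all.
    query: '(not (range field=price [0,}))',
    queryParser: 'structured'
  };
}

// Usage with the real client would look roughly like:
//   const AWS = require('aws-sdk');
//   const csd = new AWS.CloudSearchDomain({ endpoint: 'search-mydomain-xxxx.us-east-1.cloudsearch.amazonaws.com' });
//   csd.search(buildMissingPriceParams(), (err, data) => { /* data.hits ... */ });
```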
Side note: I get the impression that you may be using CloudSearch to store your data. If so, I would consider using a datastore (which are designed to store data) rather than CloudSearch (which is a search engine). For one, it'll make this sort of query much easier.
I'm trying to compute an average date difference using QueryDSL.
I created a small project to demonstrate what I'm trying to accomplish, in a simplified manner (the real query is a lot more complex, with tons of joins / where / sort clauses). We have a Customer class with a birthDate field, and we are trying to get the average age, in seconds, of our customers. We also want the maximum age, but let's focus on the average for this post.
I tried writing this query using querydsl-jpa, but it fails with an obscure error:
java.lang.NullPointerException
at org.hibernate.dialect.function.StandardAnsiSqlAggregationFunctions$AvgFunction.determineJdbcTypeCode(StandardAnsiSqlAggregationFunctions.java:106)
at org.hibernate.dialect.function.StandardAnsiSqlAggregationFunctions$AvgFunction.render(StandardAnsiSqlAggregationFunctions.java:100)
at org.hibernate.hql.internal.ast.SqlGenerator.endFunctionTemplate(SqlGenerator.java:233)
[...]
I also tried other approaches, like using NumberTemplate.create(Double.class, "{0} - {1}", DateExpression.currentDate(), customer.birthDate).avg(), but it doesn't return the correct value. If we want to get a date difference in seconds, it seems we need to find some way of calling the database-specific date/time difference functions, not just use the minus sign.
Sadly, computing a date difference doesn't seem to be possible in JPQL, so I guess querydsl-jpa has limitations there too. So we would have to write a native SQL query, or find some hack to have the QueryDsl-generated JPQL call a native database function.
JPA 2.1 added support for invoking database functions, but there is a problem: the MySQL function takes the form TIMESTAMPDIFF(SECOND, '2012-06-06 13:13:55', '2012-06-06 15:20:18'). It would probably work if the first parameter (SECOND) were a String, but it seems to be a reference to some kind of constant, and generating JPQL with the first parameter unquoted seems complicated.
QueryDSL added support for date differences, but it seems most of the code resides in the querydsl-sql project, so I'm wondering if I can benefit from this with querydsl-jpa.
Here are my questions:
Is it possible to compute the average date difference using querydsl-jpa, having it maybe call the native database functions using JPA 2.1 support (maybe using Expressions.numberTemplate())? Or are we forced to use querydsl-sql?
If we have to use querydsl-sql, how do we generate both QCustomer and SCustomer? QCustomer is currently generated from the Customer entity using the plugin "com.mysema.maven:apt-maven-plugin". If I understood correctly, I have to use a different plugin (com.querydsl:querydsl-maven-plugin) to generate the SCustomer query type?
When looking at querydsl-sql-example, I don't see any entity class, so I guess the query types are generated by QueryDSL from the database schema? Is there a way to generate the SCustomer query type from the entity instead, like we do with querydsl-jpa?
If we use querydsl-sql, is there a way to "re-use" our querydsl-jpa predicates / sorts / joins clauses in the querydsl-sql query? Or do we have to duplicate that code using querydsl-sql-specific classes?
I'm also considering creating a database function that delegates to TIMESTAMPDIFF(SECOND, x, y), but it's not very portable...
Am I missing something? Is there a simpler way of doing what I'm trying to do?
Using template expressions you should be able to inject any custom JPQL snippets into the Querydsl query. That should answer your first question.
Using both querydsl-jpa and querydsl-sql in the same project is possible, but adds some complexity.
Is it possible to transform the returned data from a Find query in MongoDB?
As an example, I have a first and last field to store a user's first and last name. In certain queries, I wish to return the first name and last initial only (e.g. 'Joe Smith' returned as 'Joe S'). In MySQL a SUBSTRING() function could be used on the field in the SELECT statement.
Are there data transformations or string functions in Mongo like there are in SQL? If so can you please provide an example of usage. If not, is there a proposed method of transforming the data aside from looping through the returned object?
It is possible to do just about anything server-side with MongoDB. The reason you will usually hear "no" is that you sacrifice too much speed for it to make sense under ordinary circumstances. One of the main forces behind PyMongo, Mike Dirolf of 10gen, has a good blog post on using server-side JavaScript with PyMongo here: http://dirolf.com/2010/04/05/stored-javascript-in-mongodb-and-pymongo.html. His example stores a JavaScript function that returns the sum of two fields, but you could easily modify it to return the first letter of your user name field. The gist would be something like:
db.system_js.first_letter = "function (x) { return x.charAt(0); }"
Understand first, though, that MongoDB is made to be really good at retrieving your data, not at processing it. The recommendation (see, for example, 50 Tips and Tricks for MongoDB Developers by Kristina Chodorow, O'Reilly) is to do what Andrew tersely alluded to above: make a first-letter field and return that instead. Any processing can be done more efficiently in the application.
But if you feel that even querying for the full name before returning fullname[0] from your 'view' is too much of a security risk, you don't need to do everything the fastest possible way. I'd avoided map-reduce in MongoDB for a while because of all the public concerns about speed. Then I ran my first map-reduce and twiddled my thumbs for 0.1 seconds as it processed 80,000 10k documents. I realize that, in the scheme of things, that's tiny. But it illustrates that just because server-side processing would be a bad performance hit for a massive website doesn't mean it would matter to you. In my case, I imagine it would take me longer to migrate to Hadoop than to just eat that 0.1 seconds every now and then. Good luck with your site.
The question you should ask yourself is why you need that data. If you need it for display purposes, do that in your view code. If you need it for query purposes, then do as Andrew suggested, and store it as an extra field on the object. Mongo doesn't provide server-side transformations (usually, and where it does, you usually don't want to use them); the answer is usually to not treat your data as you would in a relational DB, but to use the more flexible nature of the data store to pre-bake your data into the formats that you're going to be using.
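As a sketch of that pre-baking approach (the field names here are assumptions, not from the question):

```javascript
// Derives the display form at write time and stores it alongside the
// raw fields, so reads need no transformation at all.
function bakeDisplayName(user) {
  return {
    ...user,
    // e.g. { first: 'Joe', last: 'Smith' } gains displayName: 'Joe S'
    displayName: user.first + ' ' + user.last.charAt(0)
  };
}

// At insert time (Node.js MongoDB driver, roughly):
//   db.collection('users').insertOne(
//     bakeDisplayName({ first: 'Joe', last: 'Smith' })
//   );
// Queries can then project displayName directly, and it is even indexable.
```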
If you can provide more information on how this data should be used, then we might be able to answer a little more usefully.