We have a .NET application that defines a DateTime in the schema like so:
[ProtoMember(20)]public DateTime? Birthdate;
The application is able to serialize the data using protobuf-net and later upon deserialization the date is retrieved accurately as expected.
I am now trying to deserialize the same buffer in our node.js application using protobuf.js. I've defined that data point in the .proto file like so:
google.protobuf.Timestamp Birthdate = 20;
Upon decoding it the resulting Birthdate is not the same date as the original data. For example, when the date is originally 10/10/1976, the deserialized date is:
"Birthdate": {
"seconds": "4948"
}
When creating a JavaScript Date from that (new Date(4948 * 1000)), the result is 1/1/1970. What is going wrong here?
This comes down to history. protobuf-net had support for DateTime a long time before there was a well-defined Timestamp in the shared libraries. So: it invented something that would allow round-trip between protobuf-net and itself. Many years later, along comes Timestamp, and ... put simply, it uses a different storage layout. The good news is: protobuf-net can talk that dialect, but because we couldn't break existing code, it is "opt in".
If you use a Timestamp in a .proto, you can see that protobuf-net generates, for that .proto:
[global::ProtoBuf.ProtoMember(20, DataFormat = global::ProtoBuf.DataFormat.WellKnown)]
public global::System.DateTime? Birthdate { get; set; }
or more simply, to match your existing code:
[ProtoMember(20,DataFormat=DataFormat.WellKnown)]public DateTime? Birthdate;
With that in place - it should be using the same data layout and you should get the same values. This is the recommended option if you need to exchange data between platforms. However, note that this is a change to your existing layout. If you need tips on migrating without breaking existing usage, let me know - it is possible (the short version would be: leave field 20 as the old style; add a new property that acts similarly and uses the new format - only serialize the new one, but allow the old one to deserialize).
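For illustration, here is a minimal sketch of that migration shape - the field number 21, the property names, and the null-returning getter are my assumptions, not something from the original contract:

using System;
using ProtoBuf;

[ProtoContract]
public class Person
{
    // New field: Timestamp-compatible layout, the only one written from now on.
    // Field 21 is assumed to be unused in the existing contract.
    [ProtoMember(21, DataFormat = DataFormat.WellKnown)]
    public DateTime? Birthdate { get; set; }

    // Legacy field 20: the getter always returns null, so protobuf-net never
    // writes it, but old payloads that still carry it are funnelled into
    // Birthdate by the setter.
    [ProtoMember(20)]
    private DateTime? BirthdateLegacy
    {
        get => null;
        set => Birthdate = value;
    }
}

On the protobuf.js side, the .proto would then declare the google.protobuf.Timestamp on field 21 and can ignore field 20 entirely.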
I want to use the NodaTime library to convert the date from one timezone to another like this.
string fromSystemTimeZoneId = "GMT Standard Time";
string toSystemTimeZoneId = "Central Standard Time";
TimeZoneInfo fromTimeZone = TimeZoneInfo.FindSystemTimeZoneById(fromSystemTimeZoneId);
TimeZoneInfo toTimeZone = TimeZoneInfo.FindSystemTimeZoneById(toSystemTimeZoneId);
var convertedTime = TimeZoneInfo.ConvertTime(inputDateTime, fromTimeZone, toTimeZone);
The above code works perfectly fine for me, but now I want to use IANA standard time zone IDs (like Europe/London and America/Chicago) in place of the Windows OS-provided time zone IDs.
I am using .NET 4.7.2 and cannot upgrade the framework due to some limitations.
I have gone through this answer, but I am looking for a simple few lines of code, nothing complicated.
"But I am looking for a simple few lines of code, nothing complicated."
But you're trying to do something complicated. Noda Time makes everything explicit, which means it's clear exactly what's going on - at the cost of being a little more verbose.
DateTime has no concept of a time zone itself (other than for the special cases where the Kind is Utc or Local). So TimeZoneInfo.ConvertTime has to:
Consider what instant in time is represented by inputDateTime in fromTimeZone
Work out what that instant in time looks like in toTimeZone
In Noda Time, those are two separate operations if you want to start and end with a LocalDateTime:
LocalDateTime inputDateTime = ...;
DateTimeZone fromTimeZone = ...;
DateTimeZone toTimeZone = ...;
// There are options for how this conversion is performed, as noted in other questions
ZonedDateTime inputDateTimeInFromTimeZone = inputDateTime.InZoneLeniently(fromTimeZone);
// This conversion is always unambiguous, because ZonedDateTime unambiguously
// refers to a single instant in time
ZonedDateTime inputDateTimeInToTimeZone = inputDateTimeInFromTimeZone.WithZone(toTimeZone);
LocalDateTime localPart = inputDateTimeInToTimeZone.LocalDateTime;
So that's basically the equivalent conversion - but you need to be explicit about how you want to handle skipped/ambiguous inputs. If you want everything in your app to use the same conversion, you can wrap that in a method that looks just like TimeZoneInfo.ConvertTime. But why not just keep things in a ZonedDateTime instead of LocalDateTime to start with? Then you don't get into the ambiguous position - other than potentially when converting user input (which is entirely separate from converting from one time zone to another).
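For example, a minimal sketch of such a wrapper - the class and method names are mine, and InZoneLeniently is just one possible resolver choice:

using NodaTime;

public static class ZoneConverter
{
    // Mirrors the shape of TimeZoneInfo.ConvertTime: local time in, local time out.
    // Lenient resolution of skipped/ambiguous inputs is an assumption; pick a
    // different resolver if your app needs other behaviour.
    public static LocalDateTime ConvertTime(
        LocalDateTime input, DateTimeZone fromZone, DateTimeZone toZone)
    {
        return input.InZoneLeniently(fromZone)
                    .WithZone(toZone)
                    .LocalDateTime;
    }
}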
What is the proper method to seed data into an Azure Database? Currently, in development, I have a seeder method that inserts the first couple of users as well as products. The users' (including the admin user's) username and password are hardcoded into the Seed method; is this an acceptable practice?
As far as the products are concerned, I have a JSON file with the product names and descriptions, which the seeder method iterates through in development to insert the data.
To answer your question "The users' (including the admin user's) username and password are hardcoded into the Seed method; is this an acceptable practice?":
No, you should not keep the password in cleartext. You can keep it in encrypted (or better, hashed) form and seed that value instead.
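As a minimal sketch - assuming ASP.NET Core Identity's PasswordHasher (or swap in whatever hashing routine your stack provides); the User type and the environment variable name are placeholders I made up:

using System;
using Microsoft.AspNetCore.Identity;

public class User
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public string PasswordHash { get; set; }
}

public static class UserSeed
{
    public static User CreateAdmin()
    {
        // Pull the plaintext from configuration (user secrets, an environment
        // variable, Key Vault) instead of hard-coding it in the Seed method.
        var plaintext = Environment.GetEnvironmentVariable("SEED_ADMIN_PASSWORD");

        var admin = new User { Id = 1, UserName = "admin" };
        admin.PasswordHash = new PasswordHasher<User>().HashPassword(admin, plaintext);
        return admin;
    }
}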
In EF Core 2.1, the seeding workflow is quite different. There is now Fluent API logic to define the seed data in OnModelCreating. Then, when you create a migration, the seeding is transformed into migration commands to perform inserts, and is eventually transformed into SQL that that particular migration executes. Further migrations will know to insert more data, or even perform updates and deletes, depending on what changes you make in the OnModelCreating method.
Suppose the three classes in my model are Magazine, Article and Author. A magazine can have one or more articles and an article can have one author. There's also a PublicationsContext that uses SQLite as its data provider and has some basic SQL logging set up.
Let's take the example of a single entity type.
Let’s start by seeing what it looks like to provide seed data for a magazine—at its simplest.
The key to the new seeding feature is the HasData Fluent API method, which you can apply to an Entity in the OnModelCreating method.
Here’s the structure of the Magazine type:
public class Magazine
{
public int MagazineId { get; set; }
public string Name { get; set; }
public string Publisher { get; set; }
public List<Article> Articles { get; set; }
}
It has a key property, MagazineId, two strings and a list of Article types. Now let’s seed it with data for a single magazine:
protected override void OnModelCreating (ModelBuilder modelBuilder)
{
modelBuilder.Entity<Magazine> ().HasData
(new Magazine { MagazineId = 1, Name = "MSDN Magazine" });
}
A couple things to pay attention to here: First, I’m explicitly setting the key property, MagazineId. Second, I’m not supplying the Publisher string.
Next, I’ll add a migration, my first for this model. I happen to be using Visual Studio Code for this project, which is a .NET Core app, so I’m using the CLI migrations command, “dotnet ef migrations add init.” The resulting migration file contains all of the usual CreateTable and other relevant logic, followed by code to insert the new data, specifying the table name, columns and values:
migrationBuilder.InsertData(
table: "Magazines",
columns: new[] { "MagazineId", "Name", "Publisher" },
values: new object[] { 1, "MSDN Magazine", null });
Inserting the primary key value stands out to me here—especially after I’ve checked how the MagazineId column was defined further up in the migration file. It’s a column that should auto-increment, so you may not expect that value to be explicitly inserted:
MagazineId = table.Column<int>(nullable: false)
.Annotation("Sqlite:Autoincrement", true)
Let’s continue to see how this works out. Using the migrations script command, “dotnet ef migrations script,” to show what will be sent to the database, I can see that the primary key value will still be inserted into the key column:
INSERT INTO "Magazines" ("MagazineId", "Name", "Publisher")
VALUES (1, 'MSDN Magazine', NULL);
That’s because I’m targeting SQLite. SQLite will insert a key value if it’s provided, overriding the auto-increment. But what about with a SQL Server database, which definitely won’t do that on the fly?
I switched the context to use the SQL Server provider to investigate and saw that the SQL generated by the SQL Server provider includes logic to temporarily set IDENTITY_INSERT ON. That way, the supplied value will be inserted into the primary key column. Mystery solved!
You can use HasData to insert multiple rows at a time, though keep in mind that HasData is specific to a single entity. You can’t combine inserts to multiple tables with HasData. Here, I’m inserting two magazines at once:
modelBuilder.Entity<Magazine>()
.HasData(new Magazine{MagazineId=2, Name="New Yorker"},
new Magazine{MagazineId=3, Name="Scientific American"}
);
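To tie this back to the products JSON file mentioned in the question, here is a hedged sketch - the context name, the Product type, the file path and the use of Newtonsoft.Json are all my assumptions. Every seeded row must carry an explicit key value, and a related table (Article, say) needs its own HasData call with the foreign key set explicitly, since one call can't insert into multiple tables:

using System.Collections.Generic;
using System.IO;
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json;

public class Product
{
    public int ProductId { get; set; }   // HasData requires explicit key values
    public string Name { get; set; }
    public string Description { get; set; }
}

public class StoreContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The file is read at design time, when the migration is created,
        // and its rows are baked into that migration as InsertData calls.
        var products = JsonConvert.DeserializeObject<List<Product>>(
            File.ReadAllText("SeedData/products.json"));

        modelBuilder.Entity<Product>().HasData(products.ToArray());
    }
}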
For a complete example, you can browse through this sample repo.
Hope it helps.
So, a quick overview of what I'm doing:
We're currently storing events to Azure Table storage from a Node.js cloud service using the "azure-storage" npm module. We're storing our own timestamps for these events in storage (as opposed to using the Azure defined one).
Now, we have coded a generic storage handler script that for the moment just stores all values as strings. To save refactoring this script, I was hoping there would be a way to tweak the query instead.
So, my question is, is it possible to query by datetime where the stored value is not actually a datetime field and instead a string?
My original query included the following:
.where( "_timestamp ge datetime'?'", timestamp );
In the above code I need to somehow have the query treat _timestamp as a datetime instead of a string...
Would something like the following work, or what's the best way to do it?
.where( "datetime _timestamp ge datetime'?'", timestamp );
AFAIK, if the attribute type is String in an Azure Table, you can't convert that to DateTime. Thus you won't be able to use .where( "_timestamp ge datetime'?'", timestamp );
If you're storing your _timestamp in yyyy-MM-ddTHH:mm:ssZ format, then you could simply do a string based query like
.where( "_timestamp ge '?'", timestamp );
and that should work just fine, other than the fact that this query is going to do a full table scan rather than an optimized query. However, if you're storing the timestamp in some other format, you may get different results.
So I am new to NodaTime and am trying to use it to store time zone information using the DateTimeZone object.
I came across the sample below in the user guide, which gives me a nice DateTimeZone object from the tzdb, which is great.
var london = DateTimeZoneProviders.Tzdb["Europe/London"];
My question is: how do I get a list of the time zone strings ("Europe/London") that are used in the tzdb? I looked around but couldn't find it anywhere. Is there a standard list somewhere that I can refer to? How does this work? For example, what string should I pass for EST?
Thanks!
To fetch the time zone IDs programmatically, use the Ids property in IDateTimeZoneProvider. For example, to find all zones:
var provider = DateTimeZoneProviders.Tzdb;
foreach (var id in provider.Ids)
{
var zone = provider[id];
// Use the zone
}
For Eastern Time, you probably want America/New_York.
More generally, these identifiers are the ones from IANA - and they're the ones used in most non-Windows systems.
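For instance, a minimal follow-up sketch (assuming NodaTime 2.x, where SystemClock.Instance.GetCurrentInstant() is available) that resolves the current moment in US Eastern time:

var eastern = DateTimeZoneProviders.Tzdb["America/New_York"];

// Project "now" into the Eastern zone; the offset comes out as EST or EDT
// depending on the date, which is exactly what the IANA ID handles for you.
ZonedDateTime nowInEastern = SystemClock.Instance.GetCurrentInstant().InZone(eastern);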
I am using Node to fetch data from MySQL. In the database, I have a record like 2013-08-13 15:44:53, but when Node fetches it from the database, it assigns a value like 2013-08-19T07:54:33.000Z.
I just need to get the time in the same format as it is in the MySQL table. (By the way, my column type is DATETIME in MySQL.)
In Node:
connection.query(post, function(error, results, fields) {
userSocket.emit('history :', {
'dataMode': 'history',
msg: results,
});
});
When retrieving it from the database you most likely get a Date object, which is exactly what you should work with (strings are only good for displaying dates; working on a string representation of a date is not something you want to do).
If you need a certain string representation, create it based on the data stored in the Date object - or even better, get some library that adds a proper strftime-like method to its prototype.
The best choice for such a library is moment.js which allows you to do this to get the string format you want:
moment('2013-08-19T07:54:33.000Z').format('YYYY-MM-DD hh:mm:ss')
// output (in my case on a system using UTC+2 as its local timezone):
// "2013-08-19 09:54:33"
However, when sending it through a socket (which requires a string representation) it's a good idea to use the default one since you can pass it to the new Date(..) constructor on the client side and get a proper Date object again.