Why is ZonedDate absent in NodaTime?

I am exploring using the NodaTime library, but am a bit perplexed that both LocalDate and ZonedDateTime types are provided, yet there is no ZonedDate. Since the API appears to be carefully thought through, I suspect this is not a mere oversight. I can certainly use a ZonedDateTime with a zero time portion, but thought I would ask before going too far down that road.

Strings and Strands in MoarVM

When running Raku code on Rakudo with the MoarVM backend, is there any way to print information about how a given Str is stored in memory from inside the running program? In particular, I am curious whether there's a way to see how many Strands currently make up the Str (whether via Raku introspection, NQP, or something that accesses the MoarVM level (does such a thing even exist at runtime?)).
If there isn't any way to access this info at runtime, is there a way to get at it through output from one of Rakudo's command-line flags, such as --target, or --tracing? Or through a debugger?
Finally, does MoarVM manage the number of Strands in a given Str? I often hear (or say) that one of Raku's super powers is that it can index into Unicode strings in O(1) time, but I've been thinking about the pathological case, and it feels like it would be O(n). For example,
(^$n).map({~rand}).join
seems like it would create a Str with a length proportional to $n that consists of $n Strands – and, if I'm understanding the data structure correctly, that means that indexing into this Str would require checking the length of each Strand, for a time complexity of O(n). But I know that it's possible to flatten a Strand-ed Str; would MoarVM do something like that in this case? Or have I misunderstood something more basic?
When running Raku code on Rakudo with the MoarVM backend, is there any way to print information about how a given Str is stored in memory from inside the running program?
My educated guess is yes, as described below for App::MoarVM modules. That said, my education came from a degree I started at the Unseen University, and a wizard had me expelled for guessing too much, so...
In particular, I am curious whether there's a way to see how many Strands currently make up the Str (whether via Raku introspection, NQP, or something that accesses the MoarVM level (does such a thing even exist at runtime?)).
I'm 99.99% sure strands are purely an implementation detail of the backend, and there'll be no Raku or NQP access to that information without MoarVM specific tricks. That said, read on.
If there isn't any way to access this info at runtime
I can see there is access at runtime via MoarVM.
is there a way to get at it through output from one of Rakudo's command-line flags, such as --target, or --tracing? Or through a debugger?
I'm 99.99% sure there are multiple ways.
For example, there's a bunch of strand debugging code in MoarVM's ops.c file starting with #define MVM_DEBUG_STRANDS ....
Perhaps more interesting are what appears to be a veritable goldmine of sophisticated debugging and profiling features built into MoarVM. Plus what appear to be Rakudo specific modules that drive those features, presumably via Raku code. For a dozen or so articles discussing some aspects of those features, I suggest reading timotimo's blog. Browsing github I see ongoing commits related to MoarVM's debugging features for years and on into 2021.
Finally, does MoarVM manage the number of Strands in a given Str?
Yes. I can see that the string handling code (some links are below), which was written by samcv (extremely smart and careful) and, I believe, reviewed by jnthn, has logic limiting the number of strands.
I often hear (or say) that one of Raku's super powers is that it can index into Unicode strings in O(1) time, but I've been thinking about the pathological case, and it feels like it would be O(n).
Yes, if a backend that supported strands did not manage the number of strands.
But for MoarVM I think the intent is to set an absolute upper bound with #define MVM_STRING_MAX_STRANDS 64 in MoarVM's MVMString.h file, and logic that checks against that (and other characteristics of strings; see this else if statement as an exemplar). But the logic is sufficiently complex, and my C chops sufficiently meagre, that I am nowhere near being able to express confidence in that, even if I can say that that appears to be the intent.
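To make the cost model concrete, here is a hypothetical sketch (in JavaScript, not MoarVM's actual C code) of a rope-like string made of strands. Indexing has to scan strand lengths, which is what makes an unmanaged strand count O(n); capping the strand count (mirroring the MVM_STRING_MAX_STRANDS idea) and flattening past the cap keeps indexing effectively O(1). The class and constant names below are illustrative only.

```javascript
// Hypothetical sketch (not MoarVM code): a rope of strands where
// indexing must scan strand lengths, plus a cap that triggers flattening.
const MAX_STRANDS = 64; // mirrors the MVM_STRING_MAX_STRANDS idea

class StrandString {
  constructor(strands) { this.strands = strands; }

  static concat(a, b) {
    const strands = [...a.strands, ...b.strands];
    // Once the cap is exceeded, collapse into a single flat strand,
    // restoring cheap indexing at the cost of one O(n) copy.
    return strands.length > MAX_STRANDS
      ? new StrandString([strands.join("")])
      : new StrandString(strands);
  }

  charAt(i) {
    // O(number of strands) in the worst case: walk strands,
    // subtracting each strand's length until we land in the right one.
    for (const s of this.strands) {
      if (i < s.length) return s[i];
      i -= s.length;
    }
    return undefined;
  }

  toString() { return this.strands.join(""); }
}
```

With the cap in place, a long chain of joins can never produce more than MAX_STRANDS strands, so indexing never degrades past a fixed constant — which appears to be the intent of the MoarVM logic discussed above.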
For example, (^$n).map({~rand}).join seems like it would create a Str with a length proportional to $n that consists of $n Strands
I'm 95% confident that indexing into strings constructed by simple joins like that will be O(1).
This is based on me thinking that a Raku/NQP level string join operation is handled by MVM_string_join, and my attempts to understand what that code does.
But I know that it's possible to flatten a Strand-ed Str; would MoarVM do something like that in this case?
If you read the code you will find it's doing very sophisticated handling.
Or have I misunderstood something more basic?
I'm pretty sure I will have misunderstood something basic so I sure ain't gonna comment on whether you have. :)
As far as I understand it, the fact that MoarVM implements strands (i.e., concatenating two strings will only result in the creation of a strand that consists of "references" to the original strings) is really just that: an implementation detail.
You can implement the Raku Programming Language without needing to implement strands. Therefore there is no way to introspect this, at least to my knowledge.
There has been a PR to expose the nqp:: op that would actually concatenate strands into a single string, but that has been refused / closed: https://github.com/rakudo/rakudo/pull/3975

Fix Server Time on NodeJS and ExpressJS

Running Node JS and ExpressJS, my server time seems to be 5 hours ahead. Anyone have any suggestions?
Most of us face this problem when dealing with times and dates, such as producing charts based on them, so you are not alone. No matter what time your server is on, it will not match the timezone of everyone who uses the Node.js program, unless this is a private application. Some suggestions:
You did not mention what type of server you are running. First, you need to decide what timezone (tz) your Node.js program will use (for sanity purposes if nothing else). If you have an application that will be accessed from anywhere in the world, people expect certain things to be shown in their local time, while others can be in UTC.
Your programs (modules) need to be aware of when they are using UTC or not. For instance in a DB where you have a timestamp, you need to know what time is being used for that timestamp, so calculations based on TZ can be correct afterwards.
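The store-in-UTC advice above can be sketched with plain Date, no library needed: keep timestamps as ISO-8601 UTC strings and convert only at the display edge. (The field name `createdAt` here is illustrative, not from the question.)

```javascript
// Store timestamps as ISO-8601 UTC strings; convert only at the edges.
function nowUtcIso() {
  return new Date().toISOString(); // always UTC, e.g. "2021-03-04T15:02:07.123Z"
}

function toLocalDisplay(utcIso) {
  // Date parses the trailing "Z" as UTC; toLocaleString renders it
  // in the local timezone of wherever this code runs.
  return new Date(utcIso).toLocaleString();
}

// Illustrative record: the DB column holds the UTC string.
const record = { createdAt: nowUtcIso() };
```

This way a timestamp stored on a server that is "5 hours ahead" still round-trips correctly, because the stored value is unambiguous.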
Now the headache. For serious management of timezones and formats, you should use moment. However, and I say this after painful experience: though moment is great, take time to understand it. I would not suggest just diving in; take the time to understand how to manipulate timezones correctly, especially relative to UTC, for your own needs. Timezone handling in moment ranges from the easy to a fairly elaborate scheme. Read up on it here.
There are some other datetime modules which are excellent, (just go to NPM and search for date time) and moment has some other plugins (like twix), so the field is covered.
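As a built-in alternative to pulling in a date module just for display, Node also ships Intl.DateTimeFormat, which can render a UTC instant in an arbitrary IANA timezone. This assumes a Node build with full ICU data (the default in official Node releases for some years now).

```javascript
// Built-in timezone-aware formatting, no external module required.
// Assumes a Node build with full ICU (default in official releases).
function formatInZone(date, timeZone) {
  return new Intl.DateTimeFormat("en-US", {
    timeZone,
    year: "numeric", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit", hour12: false,
  }).format(date);
}

const d = new Date(Date.UTC(2021, 0, 15, 12, 0)); // noon UTC
formatInZone(d, "UTC");              // e.g. "01/15/2021, 12:00"
formatInZone(d, "America/New_York"); // same instant, rendered as 07:00 (EST)
```

The same Date object is formatted per-zone on demand, which fits the "store UTC, convert at the edges" approach above.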
Finally, be really careful with date variable declaration, and watch out for scoping and hoisting with date and time variables. With dates and times especially, these have unseen consequences.
Hope some of the above helps.

Clearing memory in different languages for security

When studying Java I learned that Strings were not safe for storing passwords, since you can't manually clear the memory associated with them (you can't be sure they will eventually be gc'ed, interned strings may never be, and even after gc you can't be sure the physical memory contents were really wiped). Instead, I was to use char arrays, so I can zero them out after use. I've tried to search for similar practices in other languages and platforms, but so far I couldn't find the relevant info (usually all I see are code examples of passwords stored in strings with no mention of any security issue).
I'm particularly interested in the situation with browsers. I use jQuery a lot, and my usual approach is just to set the value of a password field to an empty string and forget about it:
$(myPasswordField).val("");
But I'm not 100% convinced it is enough. I also have no idea whether or not the strings used for intermediate access are safe (for instance, when I use $.ajax to send the password to the server). As for other languages, usually I see no mention of this issue (another language I'm interested in particular is Python).
I know questions attempting to build lists are controversial, but since this deals with a common security issue that is largely overlooked, IMHO it's worth it. If I'm mistaken, I'd be happy to know just from JavaScript (in browsers) and Python then. I was also unsure whether to ask here, at security.SE or at programmers.SE, but since it involves the actual code to safely perform the task (not a conceptual question) I believe this site is the best option.
Note: in low-level languages, or languages that unambiguously support characters as primitive types, the answer should be obvious (Edit: not really obvious, as @Gabe showed in his answer below). I'm asking about those high-level languages in which "everything is an object" or something like that, and also about those that perform automatic string interning behind the scenes (so you may create a security hole without realizing it, even if you're reasonably careful).
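For JavaScript specifically, strings are immutable, so the char-array trick translates to keeping the secret in a mutable typed array and zeroing it afterwards. This is a best-effort sketch, not a guarantee: any string copies made by the DOM (the field's .val()), JSON.stringify, or the network stack are outside your control.

```javascript
// Sketch: hold a secret in a mutable buffer instead of a string,
// so our copy can be overwritten after use. Copies made elsewhere
// (DOM value, serialization, network buffers) remain out of reach.
function withSecret(bytes, use) {
  try {
    return use(bytes);
  } finally {
    bytes.fill(0); // best-effort wipe of this copy
  }
}

const secret = new TextEncoder().encode("hunter2"); // Uint8Array of bytes
withSecret(secret, (b) => {
  // ... derive a key, send the bytes over the wire, etc. ...
  return b.length;
});
// secret's buffer is now all zero bytes
```

This mirrors the Java char[] advice: it narrows the window during which the plaintext lives in memory, without claiming to eliminate every copy.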
Update: according to an answer in a related question, even using char[] in Java is not guaranteed to be bulletproof (or .NET SecureString, for that matter), since the gc might move the array around so its contents might stick in the memory even after clearing (SecureString at least sticks in the same RAM address, guaranteeing clearing, but its consumers/producers might still leave traces).
I guess @NiklasB. is right: even though the vulnerability exists, the likelihood of an exploit is low and the difficulty of preventing it is high, which might be the reason this issue is mostly ignored. I wish I could find at least some reference to this problem concerning browsers, but googling for it has been fruitless so far (does this scenario at least have a name?).
The .NET solution to this is SecureString.
A SecureString object is similar to a String object in that it has a text value. However, the value of a SecureString object is automatically encrypted, can be modified until your application marks it as read-only, and can be deleted from computer memory by either your application or the .NET Framework garbage collector.
Note that even for low-level languages like C, the answer isn't as obvious as it seems. Modern compilers can determine that you are writing to the string (zeroing it out) but never reading the values back, and just optimize away the zeroing. To prevent the security measure from being optimized away, Windows provides SecureZeroMemory.
For Python, there's no way to do that, according to this answer. A possibility would be using lists of characters (as length-1 strings, or maybe code units as integers) instead of strings, so you can overwrite the list after use, but that would require every piece of code that touches it to support this format (if even a single one of them creates a string with its contents, it's over).
There is also a mention of a method using ctypes, but the link is broken, so I'm unaware of its contents. This other answer also refers to it, but there's not a lot of detail.

Datatypes with haskelldb in practice (Text, UTCTime)

I just started to look into haskelldb as a more powerful companion to persistent, as I need a more powerful tool to query the database. Almost immediately I ran into difficulties with datatypes; in particular, I am using Data.Text quite extensively, UTCTime too and some custom datatypes as well. Unfortunately, although HDBC seems to support these datatypes quite well, haskelldb hides all of this and you have to write your own conversions starting from String input.
I don't want to duplicate the work already done for HDBC; what do you suggest to do in this case?
I think I will probably add a method getHdbcValue to the GetInstances class, so that I can write simple GetValue instances that leverage the HDBC infrastructure; are there any better ideas? Am I missing something obvious?
(BTW: it seems to me that this library is - maybe for historical reasons - a little over-generalized; couldn't it just support HDBC..?)
I really love PostgreSQL and its rich type collection, particularly arrays. The most used of the extra PG types across my projects outside Haskell is [int4], the typical array of ints. Bringing support for it to HaskellDB became one of the most exciting challenges I've had on my road to understanding Haskell, especially type-level programming (and TH/QQ too). Adding support for a new type looks kind of easy as long as it is supported by HDBC.
I hope this little patch shows how to add support for a new type. Here's the pull request for that; almost all the changes needed are covered there (all that is left is FlexibleInstances):
Pull Request
Main Changeset

Is SubSonic dying

I'm really interested in using SubSonic; I've downloaded it and I'm enjoying it so far, but looking at the activity on GitHub and Google Groups it doesn't seem to be very active and looks a lot like a project that's dying. There are no videos about it on Tekpub, and Rob seems to be using NHibernate for all his projects these days. I don't want to focus on learning SubSonic and integrating it into my projects if it's not going to live much longer.
So my question is: what's happening with SubSonic development? Is a new release imminent, is there lots going on behind the scenes, or is it as inactive as it seems?
I get this question, it seems, if I don't pop a release every 2 months or so. I will admit I'm behind on getting 3.0.0.4 out the door - but there's some patched code that people are sending in without tests and I will not accept that - I'd rather take my time and make sure we don't push bugs (which I apparently did with 3.0.0.3).
Anyway - it's a valid question and no, SubSonic isn't "dying". The best place to see the activity is on the Github site itself:
http://github.com/subsonic/
This is one of the main reasons I chose Github, so people can see the activity. I just pulled in a number of changes and am waiting on a last one to get tweaked (there were merge conflicts).
RE your other points:
No, I'm not using NHibernate for my work. I'm using it for Kona and a screencast. I answer just about all the email I get from our group, but yes, Google Groups is a sad thing when it comes to pruning the spam. Your best bet is to just email the group list with a question - it will get answered pretty quickly.
In terms of "death" - I need to talk about that a bit. Open Source projects are incapable of dying if they were born in a fit of inspiration and people find it useful. Both are true of SubSonic. Even if I gave up and told everyone to f-off, someone would pick it up and run with it. I do have to work, like most people, and I have to fit SubSonic into the little amounts of freetime I have between work and family. But there's no way I'd let this die - it means far too much to me.
Either way - I'm sure I'll be back here again in 6 months, answering this question again :).
I suspect that since it's that time of year, people are on holiday/vacation, so support here is reduced. I have just started using it and haven't had responses to some questions, and the last release was in July, so I am hoping support continues.
I must state that although there isn't a new release every 2 months, as Rob stated, you may get that feeling sometimes. There is still activity on the Google group and GitHub, though. If anything, before Christmas there were more fresh faces starting to make contributions than before (even simple ones like doco); this shows me that there may be more interest than ever, it's just that people are getting on with it.
My work uses SubSonic (both 2.2 and 3.0.0.3) in most projects where we have control over it. We have around 28 .NET devs and they all love it (we don't get caught up in what it can't do, as it's not a full ORM/data access layer per se).
As we only use SubSonic as a low-level query tool and not as a data access layer, I suppose we're not too closely tied to it if we need to bail on it, but we have yet to have a reason to.
My point is this: it's a really easy to use, easy to pick up, easy to modify, lightweight query tool/ORM (to a lesser extent). There are few tools out there that have all these properties and yet don't lock you into a million schools of thought. Because of this I don't see it dying any time soon - it's too addictive a tool to have on your bat belt.
I'm an active record fan buoy and SubSonic Rocksorz My Sockorz!
Because of this I recommend SubSonic to a lot of people and will continue to, although we don't use it on extremely large projects (more for project-continuity reasons like you mentioned than because it can't do the job).
Well.
I don't know how SubSonic is progressing. I started using SubSonic in 2007; before upgrading to SubSonic 3, I was pretty comfortable with SubSonic 2 - it is stable and predictable. But SubSonic 3, even 3.0.0.3, has been somewhat disappointing for me. I don't want to dwell on the features that work - thanks, SubSonic maps tables correctly. The thing I want to talk about is "Update". I tried it with the code below, but it threw an exception. After digging into the code, I was left sighing...
Look at my code:
FarmDB db = new FarmDB();
db.Update<UserAdornment>()
  .Set(o => o.is_working == false)
  .Where(o => o.user_name == HttpContext.Current.User.Identity.Name
           && o.type == userAdornment.type
           && o.id != userAdornment.id)
  .Execute();
Is this correct?
After fixing the NullReferenceException someone else asked about, which I hit too: each time I run this query, all my rows with user_name equal to the current name get is_working set to false. After checking the code:
In update.cs
public Update<T> Where(Expression<Func<T, bool>> column)
{
LambdaExpression lamda = column;
Constraint c = lamda.ParseConstraint();
Then check lamda.ParseConstraint():
I see that no matter how many "Where" conditions I supply, it only returns the first one. The worse thing is that after this,
//IColumn col = tbl.GetColumnByPropertyName(c.ColumnName);
//Constraint con = new Constraint(c.Condition, col.Name, col.QualifiedName, col.Name);
//con.ParameterName = col.PropertyName;
//con.ParameterValue = c.ParameterValue;
it builds another constraint from the previous one, but drops all the conditions from the last one.
How can it be right?
I haven't looked into SubSonic's source code much and don't understand well how it is implemented. But I am using SubSonic 3 in my project and depend heavily on it to work correctly. I really hope every bug can be tested and fixed in time.