CNG replacement for CryptQueryObject

I'm interested in trying to read fields out of a digital signature. I have code that calls CryptQueryObject, then CryptMsgGetParam to get some fields and finally CertFindCertificateInStore to load the certificate.
Any hints on how to do this using the Cryptography Next Generation APIs? Microsoft tells me CryptQueryObject is deprecated but doesn't point to its replacement.

CryptDecodeObject[Ex] is not marked as deprecated. Just sayin'.
You can emulate the blob-type detection by calling CryptDecodeObjectEx in a loop with different struct types and seeing which one doesn't error out (see the sketch below).
That said, if you use CryptQueryObject to parse a file or data block whose type you already know (as opposed to detecting it), see if there's a struct type constant for your data block under https://learn.microsoft.com/en-us/windows/win32/seccrypto/constants-for-cryptencodeobject-and-cryptdecodeobject
In general, CryptoAPI functions that deal with ASN.1 data structures (certs, CSRs, CRLs and the like) are not deprecated and have no counterpart in the CNG API. Maybe this one was marked as deprecated by mistake.
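As a rough sketch of that detection loop - assuming a Windows target, a hand-declared crypt32 binding, and only a few of the well-known struct type constants from wincrypt.h - a sizing-only call (null output buffer) is enough to learn whether the blob decodes as a given type:

```rust
// Sketch only: hand-declared crypt32 binding, Windows target assumed.
use std::ffi::c_void;

#[link(name = "crypt32")]
extern "system" {
    fn CryptDecodeObjectEx(
        dwCertEncodingType: u32,
        lpszStructType: *const u8, // LPCSTR; small integers select well-known types
        pbEncoded: *const u8,
        cbEncoded: u32,
        dwFlags: u32,
        pDecodePara: *const c_void,
        pvStructInfo: *mut c_void,
        pcbStructInfo: *mut u32,
    ) -> i32; // BOOL
}

const X509_ASN_ENCODING: u32 = 0x0000_0001;
const PKCS_7_ASN_ENCODING: u32 = 0x0001_0000;

// A few well-known lpszStructType values from wincrypt.h; extend as needed.
const CANDIDATES: &[(usize, &str)] = &[
    (2, "X509_CERT_TO_BE_SIGNED"),
    (3, "X509_CERT_CRL_TO_BE_SIGNED"),
    (4, "X509_CERT_REQUEST_TO_BE_SIGNED"),
];

fn detect_type(blob: &[u8]) -> Option<&'static str> {
    for &(struct_type, name) in CANDIDATES {
        let mut needed: u32 = 0;
        // Sizing-only call: pvStructInfo is null, so nothing is allocated.
        // A non-zero return means the blob decoded cleanly as this type.
        let ok = unsafe {
            CryptDecodeObjectEx(
                X509_ASN_ENCODING | PKCS_7_ASN_ENCODING,
                struct_type as *const u8,
                blob.as_ptr(),
                blob.len() as u32,
                0,
                std::ptr::null(),
                std::ptr::null_mut(),
                &mut needed,
            )
        };
        if ok != 0 {
            return Some(name);
        }
    }
    None
}
```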

Related

Solana JSON RPC API parameters encoding

Is there any documentation or paper on how exactly Rust parameter types are encoded when interacting with the JSON RPC API?
Like this one for Ethereum: https://docs.soliditylang.org/en/v0.8.11/abi-spec.html
There are some good abstraction tools like web3js for encoding simple types like integers, but I have not found any paper on how to encode arrays or structs.
It is insanely hard for me to get into Solana's intricacies after building Ethereum dApps, so it would be great if you could share any other good specifications.
TY!
Each program is unique in this regard, although many follow a similar model. You must find the source code or documentation for the specific program you want to call.
If you only have the source code, then (following good practices) there is commonly an instruction.rs file that identifies each of the instructions and what data they expect. It may also specify which serialization convention is used, which would clue you in to what you need to do on the client side that submits the instruction.
In addition, for data serialization/deserialization, following good practices there may be a state.rs or account_state.rs that shows how the program serializes/deserializes account data. The latter gives you the understanding you need to deserialize an AccountInfo data array and see results in the client.
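For illustration, here is a minimal sketch of one common (but by no means universal) convention: a Borsh-serialized instruction enum whose bytes become the instruction data. The enum and its variants are hypothetical; a real program defines its own in instruction.rs.

```rust
// Hypothetical instruction enum; real programs define their own variants.
use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
pub enum ExampleInstruction {
    Initialize { initial_value: u64 }, // variant tag 0 on the wire
    Update { new_value: u64 },         // variant tag 1 on the wire
}

fn main() {
    // Borsh writes a one-byte variant tag followed by the fields in
    // little-endian order - this is what a client would put in the
    // instruction's data field.
    let mut data = Vec::new();
    ExampleInstruction::Update { new_value: 42 }
        .serialize(&mut data)
        .unwrap();
    assert_eq!(data, vec![1, 42, 0, 0, 0, 0, 0, 0, 0]);

    // The program side deserializes the same bytes back.
    let decoded = ExampleInstruction::try_from_slice(&data).unwrap();
    assert_eq!(decoded, ExampleInstruction::Update { new_value: 42 });
}
```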

How to add a flatbuffer object to a new object?

I understand how to use the FlatBufferBuilder and a specific type's builder (e.g., MyNestedTableBuilder) to get the WIPOffset, and then use that to get the finished_data buffer (&[u8]). I have then been using get_root to get an object based on the buffer, so now I have an instance of MyNestedTable. Next I need to pass that to another function and create a new table instance, via a new builder, for MyTable, which has the field add_my_nested_table. I cannot see how to do this without unpacking MyNestedTable and rebuilding it again (which seems very inefficient). I am sure there is a good way to do this; I just haven't found it, even after studying the generated code and API.
Generally we have a need to pass objects around and reuse them, over the network or via API calls in Rust.
MyNestedTable isn't really an object; it is a handle to data inside the serialized buffer (your &[u8]), and any field access looks up that data on the fly.
None of the base APIs for the FlatBuffers-supported languages (including Rust) generate code that allows automatic re-serializing, since that is not a frequent operation in most use cases (you already have the serialized data).
The way to do it is through the optional "object API", supported in C++ and some other languages, but not yet in Rust (CasperN is working on such an API).
Until then, you may consider using nested_flatbuffer or some other construct to pass the serialized data directly to wherever it needs to go.
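A rough sketch of the nested_flatbuffer route: declare the field as a byte vector in the schema, then copy the already-finished nested buffer in verbatim. The MyTable/MyTableArgs names below stand in for whatever flatc would generate from your schema; only the pattern matters.

```rust
// Assumes a schema along the lines of:
//   table MyTable { my_nested_table:[ubyte] (nested_flatbuffer:"MyNestedTable"); }
// MyTable / MyTableArgs are the (hypothetical) flatc-generated types.

fn wrap_nested(nested_bytes: &[u8]) -> Vec<u8> {
    let mut fbb = flatbuffers::FlatBufferBuilder::new();

    // The finished nested buffer is stored as a plain [ubyte] vector;
    // nothing is unpacked or rebuilt.
    let nested = fbb.create_vector(nested_bytes);

    let table = MyTable::create(
        &mut fbb,
        &MyTableArgs { my_nested_table: Some(nested) },
    );
    fbb.finish(table, None);
    fbb.finished_data().to_vec()
}
```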

Google Datastore returns incomplete data via official client library for nodejs

Here is some information about the context of the problem I'm facing:
we have semi-structured data (JSON from a Node.js backend) in Datastore.
after saving an entity,
and then getting a list of entities (both soon after and even a while later),
the returned data is missing one indexed property,
yet I can find the entity by that property's value.
I use Google Datastore via the Node.js client library, @google-cloud/datastore: "^2.0.0".
How can this be possible? I would understand if, due to eventual consistency, some updates were incompletely written, etc. But how am I getting the same inconsistency - a whole property missing from an entity saved, e.g., an hour ago?
I have gone through this scenario multiple times for the same kind.
I do not have such issues with other kinds, or with other properties of that kind.
How can I avoid this type of issue with Google Datastore?
An answer for anyone who may encounter such an issue.
We mostly do not use DTOs (data-transfer objects) or any other wrappers for our kinds in this project, but for this one a DTO was used, mostly to be sure the result objects have default values for properties omitted or absent from the entity, which usually happens for entities created by an older version of the code.
After reviewing my own code more carefully, I found a piece of code that was out of sync with the other related pieces: it was missing the line that copies this property from the entity to the DTO object.
Side note: this whole situation reminds me of the story about a guy who claimed he'd found a bug in the compiler, just because he could not find the mistake in his own code.
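For what it's worth, a minimal sketch of that bug class (illustrative only - the project above is Node.js, and the Entity/Dto types here are made up): a mapper that fills in defaults for missing properties will also silently "lose" a property whose copy line was forgotten.

```rust
// Illustrative only: made-up Entity/Dto types showing how a defaulting
// mapper silently drops a field if one copy line is missing.
#[derive(Default, Debug)]
struct Dto {
    name: String,
    indexed_prop: String, // the "missing" property from the question
}

struct Entity {
    name: Option<String>,
    indexed_prop: Option<String>,
}

fn to_dto(e: &Entity) -> Dto {
    // Defaults cover properties absent on entities written by older code.
    let mut dto = Dto::default();
    if let Some(name) = &e.name {
        dto.name = name.clone();
    }
    // The bug described above: when this copy line is missing, the DTO
    // always reports the property as empty even though the entity has it.
    if let Some(p) = &e.indexed_prop {
        dto.indexed_prop = p.clone();
    }
    dto
}
```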

Hazelcast Portable serialization

I want to use Portable serialization for objects stored in an IMap to achieve:
fast indexing during insertion (without deserializing objects and without reflection)
class evolution (versioning)
Is it possible to store my classes without implementing the Portable interface?
Is it possible to store 3rd-party classes like Date or BigDecimal (or ones with nested structure) which cannot implement the Portable interface, while still being indexable?
You can achieve fast indexing using Portable, yes. You'll also see benefits when querying on non-indexed fields, since there will be no full deserialization. VersionedPortable supports versioning as well, but:
You must implement the Portable interface.
For types not supported by Portable, you need to convert the data to a supported format - for Date, a Long, for example (sketched below). And you need to write the serialization/deserialization for each property and handle versioning yourself.
Portable is backward compatible only for reads. If you update the data from an app that has a previous version of the class, you'll lose the new fields previously written by an app with a higher version of the Portable object.
So, depending on your exact requirements, you need to choose the right serialization format.
If versioning is not that important (or you can handle it manually) but query performance is, then yes, Portable makes sense. But if you're planning to use versioning heavily, I would suggest a backward/forward-compatible serialization format like Google Protocol Buffers.
You can check this example to get an idea: https://github.com/gokhanoner/data-versioning-protobuf
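To illustrate the convert-to-a-supported-format idea only (this is not the Hazelcast Portable API, which is Java; every name below is hypothetical): a Date can travel as epoch millis and a BigDecimal as its string form, with a version tag you manage yourself.

```rust
// Illustrative sketch only - not the Hazelcast Portable API. It shows
// the shape of "convert unsupported types to supported primitives and
// version the layout yourself"; all names here are made up.
use std::time::{SystemTime, UNIX_EPOCH};

struct Order {
    created: SystemTime, // would be java.util.Date in the real Java class
    amount: String,      // stands in for BigDecimal's string form
}

// The fields a writePortable-style method would actually emit.
struct OrderFields {
    version: i32,    // hand-rolled version tag for class evolution
    created_ms: i64, // Date converted to epoch millis (a supported Long)
    amount: String,  // BigDecimal converted to a supported string
}

fn to_fields(o: &Order) -> OrderFields {
    let created_ms = o
        .created
        .duration_since(UNIX_EPOCH)
        .expect("time before Unix epoch")
        .as_millis() as i64;
    OrderFields { version: 2, created_ms, amount: o.amount.clone() }
}
```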

UUIDs in CouchDB

I am wondering about the format in which UUIDs are represented by default in CouchDB. While RFC 4122 describes UUIDs like 550e8400-e29b-11d4-a716-446655440000, CouchDB uses continuous strings of characters like 3069197232055d39bc5bc39348a36417. I've spent some time searching both the wiki and the documentation for what this actually is, without any result.
Do you know whether this is a non-RFC-conformant format that simply omits all the hyphens, or a completely different representation of the 128 bits?
The background is that I'm using Java UUIDs, which are formatted as noted in the RFC. I can see the advantage that the CouchDB style is probably handier for building internal trees, but I want to be sure to use a consistent implementation.
Technically we don't use the RFC standard for UUIDs, as you've noticed. Version 4 UUIDs reserve something like four bits to specify the version of the UUID. We also don't format them with the hyphens generally seen in other implementations.
CouchDB UUIDs are 16 random bytes formatted as hex. Roughly speaking that's a v4 UUID, but not RFC compliant.
Regardless of the specifics, there's really not much of an issue in practice. You generally shouldn't try to interpret a UUID unless you're doing some sort of out-of-band analysis. CouchDB will never interpret UUIDs; we only rely on the randomness properties involved.
Bottom line: don't worry about it, and just treat them as strings after generation.
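A minimal round-trip sketch, assuming the uuid crate (with its v4 feature): the hyphenated RFC form (what java.util.UUID prints) and the hyphen-less hex form CouchDB uses are just two renderings of the same 128 bits.

```rust
// Assumes the `uuid` crate with the "v4" feature enabled.
use uuid::Uuid;

fn main() {
    let id = Uuid::new_v4();

    // Hyphenated RFC 4122 form, e.g. "550e8400-e29b-11d4-a716-446655440000".
    let hyphenated = id.hyphenated().to_string();

    // CouchDB-style plain hex, e.g. "550e8400e29b11d4a716446655440000".
    let simple = id.simple().to_string();

    // parse_str accepts both forms, so either string round-trips safely.
    assert_eq!(
        Uuid::parse_str(&hyphenated).unwrap(),
        Uuid::parse_str(&simple).unwrap()
    );
}
```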
OK, I can provide a 2019 reference from the docs site: "it's in any case preferable to provide one's own uuids" -- https://docs.couchdb.org/en/latest/best-practices/documents.html?highlight=uuid
I ran slap-bang into this because the (hobby) database I'm attempting as my first programming anything deals with an application that generates and uses RFC 4122-compliant UUIDs, and I was chewing my nails worrying about stripping the "-" bits out and putting them back on retrieval.
Then it hit me that the UUID CouchDB uses as the doc _id is a string, not a number... doh. So I use the app's UUID, generated when it creates an object, as the document's _id. No random duplicated UUIDs.
