A Socket.IO handshake looks something like this:
http://localhost:3000/socket.io/?EIO=3&transport=polling&t=M5eHk0h
What is the t parameter? I can't find an explanation.
This is the timestampParam from engine.io-client. Its value is a unique ID generated using the npm package yeast.
This is referenced in the API docs under the Socket constructor options (docs below). If no value is given to timestampParam when creating a new Socket instance, the parameter name defaults to t and is assigned a value from yeast(). You can see this in the source on line 223 of lib/transports/polling.js.
Socket constructor() Options
timestampParam (String): timestamp parameter (t)
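Paraphrased (not the verbatim source), the polling transport does something like this when it builds the request query:

    const yeast = require('yeast');

    // Paraphrase of the behaviour in lib/transports/polling.js: unless timestamping
    // was explicitly disabled, add the cache-busting parameter to the query.
    function addTimestamp(query, opts) {
      if (opts.timestampRequests !== false) {
        query[opts.timestampParam || 't'] = yeast();   // the parameter name defaults to "t"
      }
      return query;
    }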
To clarify where engine.io-client comes into play: it is a dependency of socket.io-client, which socket.io in turn depends on. engine.io provides the actual communication-layer implementation that socket.io is built upon; engine.io-client is the client portion of engine.io.
Why does socket.io use t?
As jfriend00 pointed out in the comments, t is used for cache busting. Cache busting is a technique that prevents the browser from serving a cached resource instead of requesting a fresh one from the server.
Socket.io implements cache busting with a timestamp parameter in the query string. If you assign timestampParam a value of ts, then the query key for the timestamp will be ts; it defaults to t if no value is assigned. By giving this parameter a unique value created with yeast() on every poll to the server, Socket.io always retrieves the latest data from the server and circumvents the cache. Since polling transports would not work as expected without cache busting, timestamping is enabled by default and must be explicitly disabled.
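For example, both options are forwarded from socket.io-client down to engine.io-client; the ts name here is just an arbitrary choice:

    const io = require('socket.io-client');

    const socket = io('http://localhost:3000', {
      transports: ['polling'],
      timestampParam: 'ts',      // the query key becomes ?ts=... instead of ?t=...
      timestampRequests: true    // the default for polling; set to false to disable busting
    });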
AFAIK, the Socket.io server does not utilize the timestamp parameter for anything other than cache busting.
More about yeast()
yeast() guarantees a compressed unique ID specifically for cache busting. The README gives us some more detailed information on how yeast() works.
Yeast is a unique id generator. It has been primarily designed to generate a unique id which can be used for cache busting. A common practice for this is to use a timestamp, but there are a couple of downsides when using timestamps.
The timestamp is already 13 chars long. This might not matter for 1 request but if you make hundreds of them this quickly adds up in bandwidth and processing time.
It's not unique enough. If you generate two stamps right after each other, they would be identical because the timing accuracy is limited to milliseconds.
Yeast solves both of these issues by:
Compressing the generated timestamp using a custom encode() function that returns a string representation of the number.
Seeding the id in case of collision (when the id is identical to the previous one).
To keep the strings unique it will use the . char to separate the generated stamp from the seed.
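For example (the printed values are only illustrative):

    const yeast = require('yeast');

    const a = yeast();
    const b = yeast();                 // generated within the same millisecond
    console.log(a, b);                 // e.g. "Kyxz5zBn" "Kyxz5zBn.0" (seed appended after ".")

    // encode()/decode() expose the underlying timestamp compression.
    console.log(yeast.encode(Date.now()));  // much shorter than the 13-char decimal timestamp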
Related
I am reading BEP 5 and trying to understand how a token value is generated. As I understand it, the token is a randomly generated value that is returned with a get_peers response for safety. This same token value would then be used in an announce_peer query to check that the same IP previously requested the same infohash.
My question is: how is this value generated exactly? It says the implementation is unspecified; does this mean I can implement it myself (for example, by using a SHA-1 hash)?
I tried looking at other BEPs but couldn't find any specific rules for generating a token value.
The token represents a write permission so that the other node may follow up with an announce request carrying that write permission.
Since the write permission is specific to an individual node providing the token it is not necessary to specify how it keeps track of valid write permissions, as there needs to be no agreement between nodes how the implementation works. For everyone else the token is just an opaque sequence of bytes, essentially a key.
Possible implementations are:
(stateful) keep a hashmap mapping from issued tokens to their expiration time and the remote IP they are valid for.
(stateless) hash a secret, the remote IP, the remote node ID and a validity-time-window counter, then truncate the hash. Bump the counter on a timer; when verifying, check against the current and the previous counter (see the sketch below).
Since a token is only valid for a few minutes and a node should also have a spam throttle, it doesn't need to be high strength; just enough bits to make it infeasible to brute-force. 6-8 bytes is generally enough for that purpose.
The underlying goal is to hand out a space-efficient, time-limited write permission to individual nodes in a way that other nodes can't forge.
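A minimal sketch of the stateless variant, assuming Node's crypto module; the secret, the five-minute window and the 8-byte truncation are arbitrary choices here, not part of BEP 5:

    const crypto = require('crypto');

    const SECRET = crypto.randomBytes(32);       // kept private to this node, rotated rarely
    const WINDOW_MS = 5 * 60 * 1000;             // validity window of roughly five minutes

    const currentWindow = () => Math.floor(Date.now() / WINDOW_MS);

    // Derive a short token bound to the remote IP, remote node ID and time window.
    function makeToken(remoteIp, remoteId, windowCounter) {
      return crypto.createHash('sha1')
        .update(SECRET)
        .update(remoteIp)
        .update(remoteId)
        .update(String(windowCounter))
        .digest()
        .subarray(0, 8);                         // 8 bytes is enough to resist brute force
    }

    // Accept tokens minted in the current or the previous window.
    function verifyToken(token, remoteIp, remoteId) {
      const buf = Buffer.from(token);
      if (buf.length !== 8) return false;
      const now = currentWindow();
      return [now, now - 1].some(w =>
        crypto.timingSafeEqual(makeToken(remoteIp, remoteId, w), buf));
    }

A node would hand out makeToken(ip, id, currentWindow()) in its get_peers responses and run verifyToken() on each incoming announce_peer.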
I'm getting a lot of collisions, at least 5 in the last 100,000 generated UUIDs. Right now we are checking each generated UUID against a Redis instance, so if a collision happens, we can regenerate it.
The service has at least 2 instances always running.
The UUID is generated with
crypto.randomUUID()
I can only assume they are looking at this method, and it does not allow a seed parameter.
I am not sure why this would happen, but it might have been a coincidence that 5 IDs were repeated. I suggest looking at other options people have tried, such as including Date.now() in the ID so that you truly never get a repeat.
The uuid package's npm page shows support only up to Node 14; is that the problem? https://www.npmjs.com/package/uuid
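If you want to follow that suggestion, a minimal sketch could look like this; the uniqueId helper name is made up for illustration:

    const { randomUUID } = require('crypto');

    // Hypothetical helper: append the current timestamp so that even if two
    // random UUIDs ever matched, the combined IDs would still differ.
    function uniqueId() {
      return `${randomUUID()}-${Date.now().toString(36)}`;
    }

    console.log(uniqueId()); // e.g. "b9f0...-ls9w2k1a" (illustrative)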
I know that an AAD application ID is unique within one directory (tenant). It is a GUID and apparently should be unique in the whole world, but collisions may occur. The question is: does Azure, while generating an AAD application ID, validate whether it is unique across all other directories or not?
If you look at the official documentation for the application properties, you will see that the application ID is:
The unique identifier for the application that is assigned to an
application by Azure AD. Not nullable. Read-only
How the Azure Application ID Is Generated Uniquely:
An Application ID (GUID) breaks down like this:
60 bits of timestamp,
48 bits of computer identifier,
14 bits of uniquifier, and
six bits are fixed,
for a total of 128 bits.
The goal of this algorithm is to use the combination of time and location (“space-time coordinates” for the relativity geeks out there) as the uniqueness key.
However, there’s a possibility that, for example, two GUIDs are generated in rapid succession from the same machine, so close to each other in time that the timestamp would be the same. That’s where the uniquifier comes in.
When time appears to have stood still (if two requests for a GUID are made in rapid succession) or gone backward (if the system clock is set to a new time earlier than what it was), the uniquifier is incremented so that GUIDs generated from the “second time it was five o’clock” don’t collide with those generated “the first time it was five o’clock”.
Once you see how it all works, it’s clear that you can’t just throw away part of the GUID since all the parts (well, except for the fixed parts) work together to establish the uniqueness. This is how all that works.
Note: Sometimes the network (MAC) address is also used as the computer identifier in the GUID.
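To make that layout concrete, here is a rough sketch that splits an RFC 4122 version-1 GUID into those fields (the example GUID is just a well-known v1 value, not an actual application ID):

    // Rough sketch: split an RFC 4122 version-1 GUID into the fields described above.
    function parseV1Guid(guid) {
      const hex = guid.replace(/-/g, '');
      const timeLow          = hex.slice(0, 8);
      const timeMid          = hex.slice(8, 12);
      const timeHiAndVersion = hex.slice(12, 16);  // top 4 bits are the version
      const clockSeq         = hex.slice(16, 20);  // 14-bit uniquifier + 2 variant bits
      const node             = hex.slice(20);      // 48-bit computer identifier

      const version = parseInt(timeHiAndVersion[0], 16);
      // Reassemble the 60-bit timestamp (100 ns intervals since 1582-10-15).
      const timestamp = BigInt('0x' + timeHiAndVersion.slice(1) + timeMid + timeLow);

      return { version, timestamp, clockSeq, node };
    }

    console.log(parseV1Guid('6ba7b810-9dad-11d1-80b4-00c04fd430c8')); // version: 1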
I would like to know if it is possible to send a block of data, say 128 bytes (a Motorola SREC record, for example), to a Xively server. I need this to do firmware upgrades / download images to my Arduino-connected device. As far as I can see, one can only get datapoints / values?
A value of a datapoint can be a string. Firmware updates can be implemented using the Xively API v2 by just storing string-encoded binaries as datapoints, provided that the size is small.
You can probably make some use of timestamps for rolling back to versions that did work, or something similar. Also, you probably want to use the datapoints endpoint so you can just grab the entire response body with no need to parse anything.
/v2/feeds/<feed_id>/datastreams/<datastream_id>/datapoints/<timestamp>.csv
I suppose you will need to implement this in the bootloader, which needs to be very small, so maybe you can actually skip parsing the HTTP headers and only attempt to verify whether the body looks right (i.e. has some magic byte that you put in there; you can also try to checksum it). This would be a little bit opportunistic, but might be okay for an experiment. You should probably add Xively device provisioning to this as well, but I wouldn't try implementing everything right away.
It is, however, quite challenging to implement reliable firmware updates, and there are several papers out there which you should read. Some suggest making the device's behaviour as primitive as you can, avoiding any logic and having it rely on what the server tells it to do.
To actually store the firmware string you can use cURL: add the first version into a new datastream, then update it with each new version.
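A hedged sketch of what those two calls could look like against the Xively API v2; FEED_ID, the firmware datastream id and YOUR_API_KEY are placeholders:

    # Add the first version into a new datastream named "firmware"
    curl --request PUT https://api.xively.com/v2/feeds/FEED_ID \
      --header "X-ApiKey: YOUR_API_KEY" \
      --data '{"version":"1.0.0","datastreams":[{"id":"firmware","current_value":"<SREC block>"}]}'

    # Update with a new version (adds a timestamped datapoint you can roll back to)
    curl --request POST https://api.xively.com/v2/feeds/FEED_ID/datastreams/firmware/datapoints \
      --header "X-ApiKey: YOUR_API_KEY" \
      --data '{"datapoints":[{"at":"2014-01-01T00:00:00Z","value":"<SREC block>"}]}'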
I have an IMAP account (e.g. some@gmail.com) and I know many libraries with which I can connect and replicate messages back to my destination. I want to achieve the following:
The first time, I want to download all messages (including the Sent folder), and as I download them I will save each message's ID and UID locally in some database.
The second time, I do not want to query already-downloaded messages, even if their read/unread status or any other flag has changed, or they have been deleted or purged.
Our aim is to download and sync every message locally, once and only once, the first time.
Now, I know a little about IMAP messages: they have something called an ID, a UID and a Message-ID. The ID is probably an offset in the current folder, the UID is a numeric ID within the current account, and the Message-ID is a unique string.
Now I want to know what search I should use while querying a folder so that messages, once downloaded, won't be returned to me again.
I am planning to use the http://mailsystem.codeplex.com/ library, which gives the ability to search with a custom string and returns an int array.
Assuming I have a MaxID and I want to download only messages whose ID or UID is greater than MaxID, which one should I use: UID or ID?
You should use the UID in combination with UIDVALIDITY. Both values are folder specific.
There is an informational RFC that describes how IMAP clients should do synchronization (RFC-4549, section 4.3). The text recommends issuing the following two commands:
tag1 UID FETCH <lastseenuid+1>:* <descriptors>
tag2 UID FETCH 1:<lastseenuid> FLAGS
The first command is used to fetch the required information for all unknown mails (without knowing how many mails there are). The second command is used to synchronize the flags for the already seen mails.
AFAIK this method is widely used. Therefore, many IMAP servers contain optimizations in order to provide this information quickly. Typically, the network bandwidth is the limiting factor.
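Put together, the synchronization flow looks roughly like this; client.select()/client.uidFetch() and store.getState()/store.saveState() are hypothetical stand-ins for whatever IMAP library and local database you actually use:

    // Sketch of the RFC 4549, section 4.3 flow with a hypothetical IMAP client API.
    async function syncFolder(client, store, folder) {
      const box = await client.select(folder);          // exposes box.uidValidity
      let state = await store.getState(folder);         // { uidValidity, lastSeenUid }

      if (!state || state.uidValidity !== box.uidValidity) {
        // UIDVALIDITY changed: previously stored UIDs are void, resync from scratch.
        state = { uidValidity: box.uidValidity, lastSeenUid: 0 };
      }

      // tag1 UID FETCH <lastseenuid+1>:* <descriptors>  -> messages never seen before
      const newMessages = await client.uidFetch(`${state.lastSeenUid + 1}:*`,
                                                ['UID', 'FLAGS', 'BODY[]']);

      // tag2 UID FETCH 1:<lastseenuid> FLAGS            -> flag changes on known messages
      const flagUpdates = state.lastSeenUid > 0
        ? await client.uidFetch(`1:${state.lastSeenUid}`, ['FLAGS'])
        : [];

      const maxUid = newMessages.reduce((m, msg) => Math.max(m, msg.uid), state.lastSeenUid);
      await store.saveState(folder, { uidValidity: box.uidValidity, lastSeenUid: maxUid });
      return { newMessages, flagUpdates };
    }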