UDP torrent trackers not replying [closed] - bittorrent

I've finally gotten to the stage of receiving a response from a UDP tracker.
Here's an example, which I've split into an array:
[ 1, 3765366842, 1908, 0, 2, 0 ]
Action, transaction ID, interval, leechers, seeders, peers.
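For reference, BEP 15 lays the announce response out as a 20-byte header (action, transaction ID, interval, leechers, seeders) followed by 6-byte peer entries. Here is a minimal decoding sketch, assuming the raw datagram is in a Node Buffer called msg (the helper and variable names are just illustrative):
function parseAnnounceResponse(msg) {
    const action = msg.readUInt32BE(0);         // 1 = announce
    const transactionId = msg.readUInt32BE(4);  // must match the value sent in the request
    const interval = msg.readUInt32BE(8);       // seconds until the next announce
    const leechers = msg.readUInt32BE(12);
    const seeders = msg.readUInt32BE(16);
    // Peers follow as 6-byte entries: 4-byte IPv4 address + 2-byte port.
    const peers = [];
    for (let offset = 20; offset + 6 <= msg.length; offset += 6) {
        const ip = Array.from(msg.slice(offset, offset + 4)).join('.');
        const port = msg.readUInt16BE(offset + 4);
        peers.push({ ip: ip, port: port });
    }
    return { action, transactionId, interval, leechers, seeders, peers };
}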
No matter which torrent I choose, I get 1 or 2 seeders, which I'm assuming is the tracker counting me, and no peers or leechers.
Am I not using the correct info hash?
This is how I retrieve it from a magnet link:
magnet:?xt=urn:btih:9f9165d9a281a9b8e782cd5176bbcc8256fd1871&dn=Ubuntu+16.04.1+LTS+Desktop+64-bit&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&tr=udp%3A%2F%2Fzer0day.ch%3A1337&tr=udp%3A%2F%2Fopen.demonii.com%3A1337&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Fexodus.desync.com%3A6969
...
h = 9f9165d9a281a9b8e782cd5176bbcc8256fd1871
Now I split this into chunks of two characters and parse them as hex bytes:
bytes = [];
for (var i = 0; i < h.length; i++) bytes.push(parseInt((h[i]) + h[i++], 16));
[153, 153, 102, 221, 170, 136, 170, 187, 238, 136, 204, 85, 119, 187, 204, 136, 85, 255, 17, 119]
There's no need to encode this, so I send it along with my request.
This is the only point causing trouble, yet it seems so simple...
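For context, this is roughly where those 20 raw bytes go in the BEP 15 announce request. A sketch only; connectionId (an 8-byte Buffer from the earlier connect exchange), transactionId, peerId and port are assumed to exist elsewhere:
function buildAnnounceRequest(connectionId, transactionId, infoHash, peerId, port) {
    const req = Buffer.alloc(98);
    connectionId.copy(req, 0);            // 0..7   connection_id from the connect response
    req.writeUInt32BE(1, 8);              // 8..11  action: 1 = announce
    req.writeUInt32BE(transactionId, 12); // 12..15 transaction_id
    infoHash.copy(req, 16);               // 16..35 raw 20-byte info_hash, no further encoding
    peerId.copy(req, 36);                 // 36..55 peer_id
    // 56..79: downloaded, left, uploaded (64-bit values, left at zero here)
    req.writeUInt32BE(0, 80);             // 80..83 event: 0 = none
    req.writeUInt32BE(0, 84);             // 84..87 IP address: 0 = default
    req.writeUInt32BE(0, 88);             // 88..91 key
    req.writeInt32BE(-1, 92);             // 92..95 num_want: -1 = default
    req.writeUInt16BE(port, 96);          // 96..97 port
    return req;
}
The info_hash goes in as raw bytes at offset 16; no hex or URL encoding is involved.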
http://xbtt.sourceforge.net/udp_tracker_protocol.html

As the8472 says, your decoding is incorrect:
for (var i = 0; i < h.length; i++) bytes.push(parseInt((h[i]) + h[i++], 16));
i and i++ read the same index here, because the post-increment returns the old value, so you concatenate the same character with itself. (One of the reasons to avoid clever inline tricks.) You can use i and ++i, or maybe expand it all to multiple lines for readability:
for (var i = 0; i < h.length; i += 2) {
    var hex = h.substr(i, 2);
    bytes.push(parseInt(hex, 16));
}
And if you’re using Node, just parse it into a Buffer, which can easily be converted to an array if necessary:
var bytes = Buffer.from(h, 'hex');
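As a quick sanity check with the info hash from the question, either the loop above or Buffer.from should produce the same bytes:
const h = '9f9165d9a281a9b8e782cd5176bbcc8256fd1871';
const decoded = Buffer.from(h, 'hex');
console.log(decoded[0], decoded[1]); // 159 145
console.log(Array.from(decoded));
// [ 159, 145, 101, 217, 162, 129, 169, 184, 231, 130,
//   205, 81, 118, 187, 204, 130, 86, 253, 24, 113 ]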

9f91 should result in the first two bytes being 159, 145, so your hex-decoding is incorrect.
Beyond that, you should compare your implementation with a working one using Wireshark.
http://xbtt.sourceforge.net/udp_tracker_protocol.html
As was already mentioned in an answer to another question, the official and up-to-date specs reside at bittorrent.org, and that includes the UDP tracker spec (BEP 15). The xbtt page is not maintained.

Related

What might be wrong with my use of SetGraphicsRootDescriptorTable in D3D12?

For the 7 meshes that I would like to draw, I load 7 textures and create the corresponding SRVs in a descriptor heap. Then there's another SRV for ImGui, and there are also 3 CBVs for triple buffering. So the heap layout should be: | srv x7 | srv x1 | cbv x3 |.
The problem is that when I call SetGraphicsRootDescriptorTable on range 0, which should be an SRV (the texture, actually), something goes wrong. Here's the code:
ID3D12DescriptorHeap* ppHeaps[] = { pCbvSrvDescriptorHeap, pSamplerDescriptorHeap };
pCommandList->SetDescriptorHeaps(_countof(ppHeaps), ppHeaps);
pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pCommandList->IASetIndexBuffer(pIndexBufferViewDesc);
pCommandList->IASetVertexBuffers(0, 1, pVertexBufferViewDesc);
CD3DX12_GPU_DESCRIPTOR_HANDLE srvHandle(pCbvSrvDescriptorHeap->GetGPUDescriptorHandleForHeapStart(), indexMesh, cbvSrvDescriptorSize);
pCommandList->SetGraphicsRootDescriptorTable(0, srvHandle);
pCommandList->SetGraphicsRootDescriptorTable(1, pSamplerDescriptorHeap->GetGPUDescriptorHandleForHeapStart());
If indexMesh is 5, SetGraphicsRootDescriptorTable causes the following error, though the render output still looks fine. When indexMesh is 6, the same error still occurs, along with another identical one except that the offset 8 becomes 9.
D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: Specified GPU Descriptor Handle (ptr = 0x400750000002c0 at 8 offsetInDescriptorsFromDescriptorHeapStart) of type CBV, for Root Signature (0x0000020A516E8BF0:'m_rootSignature')'s Descriptor Table (at Parameter Index [0])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_SRV) have mismatching types. All descriptors of descriptor ranges declared STATIC (not-DESCRIPTORS_VOLATILE) in a root signature must be initialized prior to being set on the command list. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
That is really weird, because I suppose the only thing that could cause this is cbvSrvDescriptorSize being wrong. It is 64, and it is set by m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); which I think should work. Besides, if I set it to another value such as 32, the application crashes.
So if cbvSrvDescriptorSize is right, why would the correct indexMesh produce the wrong descriptor handle offset? The consequence of this error is that it seems to affect my CBV, which breaks the render output. Any suggestion would be appreciated, thanks!
Thanks to Chuck's suggestion, here's the code for the root signature:
CD3DX12_DESCRIPTOR_RANGE1 ranges[3];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 0);
ranges[2].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
CD3DX12_ROOT_PARAMETER1 rootParameters[3];
rootParameters[0].InitAsDescriptorTable(1, &ranges[0], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[1].InitAsDescriptorTable(1, &ranges[1], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[2].InitAsDescriptorTable(1, &ranges[2], D3D12_SHADER_VISIBILITY_ALL);
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSignatureDesc;
rootSignatureDesc.Init_1_1(_countof(rootParameters), rootParameters, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
ThrowIfFailed(D3DX12SerializeVersionedRootSignature(&rootSignatureDesc, featureData.HighestVersion, &signature, &error));
ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
NAME_D3D12_OBJECT(m_rootSignature);
And here are some declarations in the pixel shader:
Texture2DArray g_textures : register(t0);
SamplerState g_sampler : register(s0);
cbuffer cb0 : register(b0)
{
    float4x4 g_mWorldViewProj;
    float3 g_lightPos;
    float3 g_eyePos;
    ...
};
It's not often I come across the exact problem I'm experiencing (my code is almost identical) and it's an in-progress post! Let's suffer together.
My problem turned out to be the calls to CreateConstantBufferView()/CreateShaderResourceView(): I was passing srvHeap->GetCPUDescriptorHandleForHeapStart() as the DestDescriptor handle. These need to be offset to match your table layout (the offsetInDescriptorsFromTableStart parameter of CD3DX12_DESCRIPTOR_RANGE1).
I found it easier to maintain a single D3D12_CPU_DESCRIPTOR_HANDLE into the heap and just increment handle.ptr after every call to CreateSomethingView() that uses that heap.
CD3DX12_DESCRIPTOR_RANGE1 rangesV[1] = {{}};
CD3DX12_DESCRIPTOR_RANGE1 rangesP[1] = {{}};
// Vertex
rangesV[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 0); // b0 at desc offset 0
// Pixel
rangesP[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 1); // t0 at desc offset 1
CD3DX12_ROOT_PARAMETER1 rootParameters[2] = {{}};
rootParameters[0].InitAsDescriptorTable(1, &rangesV[0], D3D12_SHADER_VISIBILITY_VERTEX);
rootParameters[1].InitAsDescriptorTable(1, &rangesP[0], D3D12_SHADER_VISIBILITY_PIXEL);
D3D12_CPU_DESCRIPTOR_HANDLE srvHeapHandle = srvHeap->GetCPUDescriptorHandleForHeapStart();
// ----
device->CreateConstantBufferView(&cbvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
// ----
device->CreateShaderResourceView(texture, &srvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
Perhaps an enum would help keep it tidier and more maintainable, though. I'm still experimenting.

Converting NodeJS byte buffer [closed]

I'm trying to figure out how to convert NodeJS code like this:
const buffer = new Buffer(24);
offset = buffer.writeUInt32BE(this.a, offset);
offset = buffer.writeUInt32BE(this.b, offset);
offset = buffer.writeUInt8(this.c, offset);
offset = buffer.writeUInt16BE(this.d ? 1 : 0, offset);
buffer.writeInt8(this.f, offset);
to Go.
I figured I could use
buffer := make([]byte, 24)
buffer[0] = a
buffer[2] = b
but this is not working.
Is there a recommended way to do something like this in Go?
You should use the encoding/binary package and its binary.ByteOrder implementations.
So in your case, using big-endian byte order, something like:
package main

import (
    "encoding/binary"
)

func main() {
    buffer := make([]byte, 24)
    // Uint32
    binary.BigEndian.PutUint32(buffer, 1)
    binary.BigEndian.PutUint32(buffer[4:], 2)
    // Uint8
    buffer[8] = 3
    // Uint16
    binary.BigEndian.PutUint16(buffer[9:], 4)
    // Uint8 (the uint16 above occupies bytes 9 and 10, so the next free byte is 11)
    buffer[11] = 5
}

Set lookup in Node.js does not seem to be O(1)

I wrote a test to measure the lookup speed of Set in Node.js (v8.4).
const size = 5000000;
const lookups = 1000000;
const set = new Set();
for (let i = 0; i < size; i++) {
    set.add(i);
}
const samples = [];
for (let i = 0; i < lookups; i++) {
    samples.push(Math.floor(Math.random() * size));
}
const start = Date.now();
for (const key of samples) {
    set.has(key);
}
console.log(`size: ${size}, time: ${Date.now() - start}`);
After running it with size = 5000, 50000, 500000, and 5000000, the result is surprising to me:
size: 5000, time: 29
size: 50000, time: 41
size: 500000, time: 81
size: 5000000, time: 130
I expected the time it takes to be relatively constant. But it increases substantially as the number of items in the Set increases. Isn't the lookup supposed to be O(1)? What am I missing here?
Update 1:
After reading some comments and answers, I understand the point everyone is trying to make here. Maybe my question should be: what is causing the increase in time? In a hash map implementation, with the same number of lookups, the only reason for an increase in lookup time should be more key collisions.
Update 2:
After more research, here is what I found:
V8 uses an ordered hash table for both its Set and Map implementations.
According to this link, there is a performance impact on lookup time for an ordered hash map, while an unordered hash map's performance stays constant.
However, V8's ordered hash table implementation is based on this, and that doesn't seem to add any overhead to lookup time as the number of items increases.
Regardless of whether the JS Set implementation is actually O(1) or not (I'm not sure it is), you should not expect O(1) operations to run in identical time across calls. Big-O measures the complexity of the operation, not its actual throughput speed.
To demonstrate this, consider the use case of sorting an array of numbers. You can sort using array.sort which I believe is O(n * log(n)) in Node.js. You can also create a (bad, but amusing) O(n) implementation using timeouts (ignore complexity of adding to the array, etc):
// input data
let array = [
    681, 762, 198, 347, 340,
    73, 989, 967, 409, 752,
    660, 914, 711, 153, 691,
    35, 112, 907, 970, 67
];
// buffer for the sorted output
let sorted = [];
// O(n) sorting algorithm
array.forEach(function (num) {
    setTimeout(sorted.push.bind(sorted, num), num);
});
// ensure the sort has finished before printing
setTimeout(function () {
    console.log(sorted);
}, 2000);
Of course, the first implementation is faster - but in terms of complexity, the second one is "better". The point is that you should only use big-O to estimate complexity; it does not guarantee any specific amount of time. If you ran the O(n) version above on an array of 20 numbers (so the same length) that contained only two-digit numbers, the execution time would be very different.
Stupid example, but it should hopefully support the point I'm trying to make :)
Caching and memory locality. V8's implementation of Set lookup has O(1) theoretical complexity, but real hardware has its own constraints and characteristics. Specifically, not every memory access has the same speed. Theoretical complexity analysis is only concerned with the number of operations, not the speed of each operation.
Update for updated question:
This answers your updated question! When you make many requests to a small Set, it is likely that the CPU has cached the relevant chunks of memory, making many of the lookups faster than they would be if the data had to be retrieved from main memory. There don't have to be more collisions for this effect to happen; it is simply the case that accessing a small memory region repeatedly is faster than spreading the same number of accesses over a large memory region.
In fact, you can measure the same effect (with smaller magnitude) with an array:
const size = 5000000;
const lookups = 1000000;
const array = new Array(size);
for (let i = 0; i < size; i++) {
    array[i] = 1;
}
const start = Date.now();
var result = 0;
for (var i = 0; i < lookups; i++) {
    var sample = Math.floor(Math.random() * size);
    result += array[sample];
}
const end = Date.now();
console.log(`size: ${size}, time: ${end - start}`);
A million lookups of random indices on a 5,000-element array will be faster than a million lookups of random indices on a 5,000,000 element array.
The reason is that for a smaller data structure, there's a greater likelihood that the random accesses will read elements that are already in the CPU's cache.
In theory you could be right: a Set could have O(1) lookup, but the JS Set definition is very specific about the algorithm. See the ECMAScript definition; it describes a loop over all elements.
Try having a look at the various HashSet implementations you can find, for example here; there might be one with O(1) .has speed.

Is there a way to convert a string to binary in D

I would like to write binary data to a file for an ancillary hash table operation and then read it back using stream.rawRead(). How would I go about converting a string to binary in D? I would prefer not to use any third-party libraries if I can.
The built-in module std.utf has methods to convert to and from the UTF encodings (UTF-8 being compatible with ASCII).
If you want to use rawRead, you should write the length of the string first, so that when reading you know how many bytes the string is.
Side note: if your strings are ASCII, it is pretty straightforward:
import std.conv : to;
import std.stdio : writeln;

// following will not work:
// ubyte[] stringBytes = cast(ubyte[]) "Добар дан!".dup;
ubyte[] stringBytes = cast(ubyte[]) "Hello world".dup;
writeln(stringBytes);
char[] charr = cast(char[]) stringBytes;
writeln(charr);
string str = to!string(charr);
writeln(str);
Output:
[72, 101, 108, 108, 111, 32, 119, 111, 114, 108, 100]
Hello world
Hello world
As Ratched pointed out, you will need some sort of Unicode conversion...
Another option is std.string.representation:
import std.stdio, std.string;
void main() {
    auto s = "March";
    auto a = s.representation;
    a.writeln; // [77, 97, 114, 99, 104]
}
https://dlang.org/library/std/string/representation.html

Any good documentation for the cblas interface? [closed]

Can someone recommend a good reference or tutorial for the cblas interface? Nothing comes up on Google, all of the man pages I've found are for the Fortran BLAS interface, and the PDF that came with MKL literally took ten seconds to search and wasn't helpful.
In particular, I'm curious why there is an extra parameter for row- vs. column-major order; can't the same operations already be achieved with the transpose flags? It seems like the extra parameter only adds complexity to an already error-prone interface.
This article shows how to use cblas (and others) in C with a simple example: http://www.seehuhn.de/pages/linear
I have quoted the relevant part below in case the site goes down.
Using BLAS
To test the BLAS routines we want to perform a simple matrix-vector multiplication. Reading the file blas2-paper.ps.gz we find that the name of the corresponding Fortran function is DGEMV. The text blas2-paper.ps.gz also explains the meaning of the arguments to this function. In cblas.ps.gz we find that the corresponding C function name is cblas_dgemv. The following example uses this function to calculate the matrix-vector product
/ 3 1 3 \ / -1 \
| 1 5 9 | * | -1 |.
\ 2 6 5 / \ 1 /
Example file testblas.c:
#include <stdio.h>
#include <cblas.h>
double m[] = {
    3, 1, 3,
    1, 5, 9,
    2, 6, 5
};
double x[] = {
    -1, -1, 1
};
double y[] = {
    0, 0, 0
};

int
main()
{
    int i, j;

    for (i=0; i<3; ++i) {
        for (j=0; j<3; ++j) printf("%5.1f", m[i*3+j]);
        putchar('\n');
    }
    cblas_dgemv(CblasRowMajor, CblasNoTrans, 3, 3, 1.0, m, 3,
                x, 1, 0.0, y, 1);
    for (i=0; i<3; ++i) printf("%5.1f\n", y[i]);
    return 0;
}
To compile this program we use the following command.
cc testblas.c -o testblas -lblas -lm
The output of this test program is
3.0 1.0 3.0
1.0 5.0 9.0
2.0 6.0 5.0
-1.0
3.0
-3.0
which shows that everything worked fine and that we did not even use the transposed matrix by mistake.
The IRIX man page for intro_cblas is pretty good:
http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?cmd=getdoc&coll=0650&db=man&fname=3%20INTRO_CBLAS
