Compiler bug when implementing core::fmt::Write - rust

When implementing core::fmt::Write for an AVR serial monitor, calling unwrap on the result of write_str writes what looks like a compiler error string to the output. Calling write_fmt in any capacity crashes, but I think these problems might be related. I'm using a custom target for AVR:
{
    "arch": "avr",
    "cpu": "atmega328p",
    "data-layout": "e-P1-p:16:8-i8:8-i16:8-i32:8-i64:8-f32:8-f64:8-n8-a:8",
    "max-atomic-width": 0,
    "env": "",
    "executables": true,
    "linker": "avr-gcc",
    "linker-flavor": "gcc",
    "linker-is-gnu": true,
    "llvm-target": "avr-unknown-unknown",
    "os": "unknown",
    "position-independent-executables": false,
    "exe-suffix": ".elf",
    "eh-frame-header": false,
    "pre-link-args": {
        "gcc": ["-mmcu=atmega328p"]
    },
    "late-link-args": {
        "gcc": ["-lgcc", "-lc"]
    },
    "target-c-int-width": "16",
    "target-endian": "little",
    "target-pointer-width": "16",
    "vendor": "unknown"
}
When calling serial.write_str("hello world"); it prints "hello world" as normal, but if I call serial.write_str("hello world").unwrap(); it instead prints part of a longer string. The longer the message, the more of this string prints; I think the full string is:
args.len()C:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\fmt\mod.rsunsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed 'isize::MAX'called 'Option::unwrap()' on a 'None' valueC:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\char\convert.rsC:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\str\iter.rsC:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\str\validations.rsErrorattempt to add with overflowC:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rustlib\src\rust\library\core\src\iter\traits\accum.rsunsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed 'isize::MAX'attempt to add with overflowunsafe precondition(s) violated: slice::from_raw_parts requires the pointer to be aligned and non-null, and the total size of the slice not to exceed 'isize::MAX'C:\Users\Jett\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\lib\rust
Update
Here's the implementation:
impl Write for Serial {
    fn write_str(&mut self, s: &str) -> core::fmt::Result {
        // Transmit each character as a single byte over the serial line.
        for c in s.chars() {
            self.transmit(c as u8);
        }
        Ok(())
    }
}
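For reference, the calls that misbehave look like this (a sketch; serial is assumed to be an instance of the Serial type above):

use core::fmt::Write;

// Prints stray panic-string data instead of just "hello world":
serial.write_str("hello world").unwrap();
// write! expands to a write_fmt call, which crashes:
write!(serial, "{}", 42).unwrap();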

After a bit more testing, it turns out that adding lto = true to [profile.dev] in Cargo.toml fixes this bug.
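For reference, that is the following in Cargo.toml:

[profile.dev]
lto = true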

Related

LLRP for Zebra FX7500 with llrpjs doesn't read tags

Using the llrpjs library for Node.js, we are attempting to read tags from the Zebra FX7500 (formerly Motorola). This discussion points to the RFID Reader Software Interface Control Guide, pages 142-144, but does not indicate which values to use to set up the device.
From what we can gather, we should issue a SET_READER_CONFIG with a custom parameter (MotoDefaultSpec = VendorIdentifier: 161, ParameterSubtype: 102, UseDefaultSpecForAutoMode: true). Do we need to include the ROSpec and/or AccessSpec values as well (are they required)? After sending the SET_READER_CONFIG message, do we still need to send the regular LLRP messages (ADD_ROSPEC, ENABLE_ROSPEC, START_ROSPEC)? Without the MotoDefaultSpec, even after sending the regular LLRP messages, neither a GET_REPORT nor a custom MOTO_GET_TAG_EVENT_REPORT message retrieves tags. Both trigger an RO_ACCESS_REPORT event message, but the tagReportData is null.
The README file for llrpjs lists "Vendor definitions support" as a TODO item. While that is somewhat vague, is it possible that the library just hasn't implemented support for custom LLRP extensions (messages/parameters), which is why none of our attempts are working? The MotoDefaultSpec parameter and MOTO_GET_TAG_EVENT_REPORT message are custom to the vendor/chipset. The MOTO_GET_TAG_EVENT_REPORT custom message seems to trigger an RO_ACCESS_REPORT similar to the base LLRP GET_REPORT message, so we assume that part is working.
It is worth noting that Zebra's 123RFID Desktop setup and optimization tool connects and reads tags as expected, so the device and antenna are working (reading tags).
Could these issues be related to the ROSPEC file we are using (see below)?
{
    "$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
    "id": 1,
    "type": "ADD_ROSPEC",
    "data": {
        "ROSpec": {
            "ROSpecID": 123,
            "Priority": 1,
            "CurrentState": "Disabled",
            "ROBoundarySpec": {
                "ROSpecStartTrigger": {
                    "ROSpecStartTriggerType": "Immediate"
                },
                "ROSpecStopTrigger": {
                    "ROSpecStopTriggerType": "Null",
                    "DurationTriggerValue": 0
                }
            },
            "AISpec": {
                "AntennaIDs": [1, 2, 3, 4],
                "AISpecStopTrigger": {
                    "AISpecStopTriggerType": "Null",
                    "DurationTrigger": 0
                },
                "InventoryParameterSpec": {
                    "InventoryParameterSpecID": 1234,
                    "ProtocolID": "EPCGlobalClass1Gen2"
                }
            },
            "ROReportSpec": {
                "ROReportTrigger": "Upon_N_Tags_Or_End_Of_ROSpec",
                "N": 1,
                "TagReportContentSelector": {
                    "EnableROSpecID": true,
                    "EnableAntennaID": true,
                    "EnableFirstSeenTimestamp": true,
                    "EnableLastSeenTimestamp": true,
                    "EnableSpecIndex": false,
                    "EnableInventoryParameterSpecID": false,
                    "EnableChannelIndex": false,
                    "EnablePeakRSSI": false,
                    "EnableTagSeenCount": true,
                    "EnableAccessSpecID": false
                }
            }
        }
    }
}
For anyone having a similar issue, we found that attempting to configure more antennas than the Zebra device has connected caused the entire spec to fail. In our case, we had two antennas connected, so including antennas 3 and 4 in the spec was causing the problem.
See below for the working ROSPEC. The extra antennas were removed from the data.AISpec.AntennaIDs property, which allowed our application to connect and read tags.
We are still having some issues with llrpjs when trying to STOP_ROSPEC because it sends an RO_ACCESS_REPORT response without a resName value. See the issue on GitHub for more information.
That said, our application works without sending the STOP_ROSPEC command.
{
    "$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
    "id": 1,
    "type": "ADD_ROSPEC",
    "data": {
        "ROSpec": {
            "ROSpecID": 123,
            "Priority": 1,
            "CurrentState": "Disabled",
            "ROBoundarySpec": {
                "ROSpecStartTrigger": {
                    "ROSpecStartTriggerType": "Null"
                },
                "ROSpecStopTrigger": {
                    "ROSpecStopTriggerType": "Null",
                    "DurationTriggerValue": 0
                }
            },
            "AISpec": {
                "AntennaIDs": [1, 2],
                "AISpecStopTrigger": {
                    "AISpecStopTriggerType": "Null",
                    "DurationTrigger": 0
                },
                "InventoryParameterSpec": {
                    "InventoryParameterSpecID": 1234,
                    "ProtocolID": "EPCGlobalClass1Gen2",
                    "AntennaConfiguration": {
                        "AntennaID": 1,
                        "RFReceiver": {
                            "ReceiverSensitivity": 0
                        },
                        "RFTransmitter": {
                            "HopTableID": 1,
                            "ChannelIndex": 1,
                            "TransmitPower": 170
                        },
                        "C1G2InventoryCommand": {
                            "TagInventoryStateAware": false,
                            "C1G2RFControl": {
                                "ModeIndex": 23,
                                "Tari": 0
                            },
                            "C1G2SingulationControl": {
                                "Session": 1,
                                "TagPopulation": 32,
                                "TagTransitTime": 0,
                                "C1G2TagInventoryStateAwareSingulationAction": {
                                    "I": "State_A",
                                    "S": "SL"
                                }
                            }
                        }
                    }
                }
            },
            "ROReportSpec": {
                "ROReportTrigger": "Upon_N_Tags_Or_End_Of_AISpec",
                "N": 1,
                "TagReportContentSelector": {
                    "EnableROSpecID": true,
                    "EnableAntennaID": true,
                    "EnableFirstSeenTimestamp": true,
                    "EnableLastSeenTimestamp": true,
                    "EnableTagSeenCount": true,
                    "EnableSpecIndex": false,
                    "EnableInventoryParameterSpecID": false,
                    "EnableChannelIndex": false,
                    "EnablePeakRSSI": false,
                    "EnableAccessSpecID": false
                }
            }
        }
    }
}

How do you configure a Mandos scCall step for a VarArgs MultiArg endpoint argument with a struct as an argument?

I'm trying to create an Elrond smart contract that allows multiple elements to be sent at once, to reduce the number of transactions needed to send the initial information to the contract.
To do so, I'm using an endpoint that takes as an argument a VarArgs of MultiArg3:
#[allow(clippy::too_many_arguments)]
#[only_owner]
#[endpoint(createMultipleNft)]
fn create_multiple_nft(
    &self,
    #[var_args] args: VarArgs<MultiArg3<ManagedBuffer, ManagedBuffer, AttributesStruct<Self::Api>>>,
) -> SCResult<u64> {
    ...
    Ok(0u64)
}
And here is my AttributesStruct:
#[derive(TypeAbi, NestedEncode, NestedDecode, TopEncode, TopDecode)]
pub struct AttributesStruct<M: ManagedTypeApi> {
    pub value1: ManagedBuffer<M>,
    pub value2: ManagedBuffer<M>,
}
And here is my Mandos step (the rest of the steps work fine; they were all working with my previous implementation for a single-element endpoint):
{
    "step": "scCall",
    "txId": "create-multiple-NFT-1",
    "tx": {
        "from": "address:owner",
        "to": "sc:minter",
        "function": "createMultipleNft",
        "arguments": [
            ["str:NFT 1"],
            ["str:www.mycoolnft.com/nft1.jpg"],
            [
                ["str:test1", "str:test2"]
            ]
        ],
        "gasLimit": "20,000,000",
        "gasPrice": "0"
    },
    "expect": {
        "out": [
            "1", "1", "1"
        ],
        "status": "0",
        "message": "",
        "gas": "*",
        "refund": "*"
    }
}
I have also tried this for the arguments:
"arguments": [
["str:NFT 1",
"str:www.mycoolnft.com/nft1.jpg",
["str:test1", "str:test2"]
]
And this:
"arguments": [
    ["str:NFT 1",
    "str:www.mycoolnft.com/nft1.jpg",
    "str:test1", "str:test2"
]
And this:
"arguments": [
    ["str:NFT 1",
    "str:www.mycoolnft.com/nft1.jpg",
    {
        "0-value1": "str:test1",
        "1-value2": "str:test2"
    }
]
Here is the error message:
FAIL: result code mismatch. Tx create-multiple-NFT-1. Want: 0. Have: 4 (user error). Message: argument decode error (args): input too short
At the same time, I'm having some problems with the struct argument input and its ManagedBuffers. Am I doing something wrong there? I'm trying to have an argument struct for an NFT that contains multiple string entries that I can send to the smart contract.
Since you are using a struct, the ManagedBuffers inside the struct are nested-encoded, which means you need to add the length of each ManagedBuffer before it.
Luckily there is a shortcut for that: the nested: prefix.
So your arguments would look like this:
"arguments": [
["str:NFT 1"],
["str:www.mycoolnft.com/nft1.jpg"],
[
["nested:str:test1", "nested:str:test2"]
]
]
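For illustration, here is what that prefix does, assuming the standard codec (a nested-encoded ManagedBuffer is written as a 4-byte big-endian length followed by the raw bytes):

str:test1         -> 74 65 73 74 31                (top-level: just the bytes of "test1")
nested:str:test1  -> 00 00 00 05 74 65 73 74 31    (length prefix 5, then the bytes)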

Rust generates calls to __truncdfsf2, which it claims not to support?

I'm writing some Rust code for an ATmega328. I understand that it does not have built-in support for floats, so I will need to provide soft implementations of the floating-point routines I want to use, like those found in compiler-builtins. However, even after including compiler-builtins, I get the following:
quaternion.rs:54: undefined reference to `__truncdfsf2'
Looking at the page for compiler-builtins, I see that there are no plans to support __truncdfsf2, since apparently it "involves floating-point types ("f128", "f80" and complex numbers) that are not supported by Rust."
Can anyone help me understand why Rust/LLVM seems to be generating calls that Rust apparently doesn't support? And is there a way to go about solving this?
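From what I can tell, __truncdfsf2 is the soft-float routine that truncates an f64 to an f32, so I assume even a plain cast can lower to a call to it on a target without hardware floats. A minimal sketch (not my actual code; the function name is made up):

// On soft-float targets, LLVM lowers this f64 -> f32 cast to __truncdfsf2.
fn to_f32(x: f64) -> f32 {
    x as f32
}

An f64 can also sneak in implicitly, since an unsuffixed float literal defaults to f64 when nothing else constrains it.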
Here's my target.json for reference.
{
    "arch": "avr",
    "atomic-cas": false,
    "cpu": "atmega328",
    "data-layout": "e-P1-p:16:8-i8:8-i16:8-i32:8-i64:8-f32:8-f64:8-n8-a:8",
    "eh-frame-header": false,
    "exe-suffix": ".elf",
    "executables": true,
    "late-link-args": {
        "gcc": [
            "-lgcc"
        ]
    },
    "linker": "avr-gcc",
    "linker-is-gnu": true,
    "llvm-target": "avr-unknown-unknown",
    "max-atomic-width": 8,
    "no-default-libraries": false,
    "no-compiler-rt": true,
    "target-c-int-width": "16",
    "target-pointer-width": "16",
    "pre-link-args": {
        "gcc": [
            "-mmcu=atmega328",
            "-Wl,--as-needed"
        ]
    },
    "vendor": "unknown",
    "os": "none",
    "target-endian": "little"
}

"The requested default allocation is not currently assigned to this server" - Pterodactyl panel

I have my own API and want to support editing a server, i.e. updating its build configuration. When I test it, the panel responds with something like "the allocation field is required", so I went to https://dashflo.net/docs/api/pterodactyl/v1/#req_11fc764c3ed648ca8e6d60bff860ca6d to read further. Their example uses "allocation": 1; I did the same, and then it says "The requested default allocation is not currently assigned to this server.", "status: 400", "code: 'DisplayException'". I don't know how to fix this. I tried allocation: "2" and "0", but then it says invalid.
This is my Request:
{
    "id": id,
    "allocation": "1",
    "memory": RAM,
    "swap": "0",
    "disk": Disk,
    "io": IO,
    "cpu": CPU,
    "threads": null,
    "feature_limits": {
        "databases": AmountOfDatabases,
        "allocations": AmountOfAllocations,
        "backups": Backups
    }
}
I am using axios to send the PATCH request.
I found the error: I needed to change my create-server request from
{
    "name": NameOfServer,
    "user": OwnerID,
    "description": "A Nodeactyl server",
    "egg": EggID,
    "pack": NestID,
    "docker_image": DockerImage,
    "startup": StartupCmd,
    "limits": {
        "memory": RAM,
        "swap": Swap,
        "disk": Disk,
        "io": IO,
        "cpu": CPU
    },
    "feature_limits": {
        "databases": AmountOfDatabases,
        "allocations": AmountOfAllocations,
        "backups": backups
    },
    "environment": {
        "DL_VERSION": Version,
        "SERVER_JARFILE": "server.jar",
        "VANILLA_VERSION": Version,
        "BUNGEE_VERSION": Version,
        "PAPER_VERSION": Version,
        "MC_VERSION": Version,
        "BUILD_NUMBER": Version,
        "INSTALL_REPO": Version,
        "BOT_JS_FILE": "index.js",
        "AUTO_UPDATE": true,
        "USER_UPLOAD": true
    },
    "allocation": {
        "default": 1,
        "additional": []
    },
    "deploy": {
        "locations": [1],
        "dedicated_ip": false,
        "port_range": []
    },
    "start_on_completion": true,
    "skip_scripts": false,
    "oom_disabled": true
}
to:
{
    "name": NameOfServer,
    "user": OwnerID,
    "description": "A Nodeactyl-v1-support server",
    "egg": EggID,
    "pack": NestID,
    "docker_image": DockerImage,
    "startup": StartupCmd,
    "limits": {
        "memory": RAM,
        "swap": Swap,
        "disk": Disk,
        "io": IO,
        "cpu": CPU
    },
    "feature_limits": {
        "databases": AmountOfDatabases,
        "allocations": AmountOfAllocations,
        "backups": backups
    },
    "environment": {
        "DL_VERSION": Version,
        "SERVER_JARFILE": "server.jar",
        "VANILLA_VERSION": Version,
        "BUNGEE_VERSION": Version,
        "PAPER_VERSION": Version,
        "MC_VERSION": Version,
        "BUILD_NUMBER": Version,
        "INSTALL_REPO": Version,
        "BOT_JS_FILE": "index.js",
        "AUTO_UPDATE": true,
        "USER_UPLOAD": true
    },
    "allocation": {
        "default": 1
    },
    "start_on_completion": true,
    "skip_scripts": false,
    "oom_disabled": true
}

CouchDB 2 _find query not using index

I'm struggling with something that should be easy, but it's making no sense to me. I have these two documents in a database:
{ "name": "foo", "type": "typeA" },
{ "name": "bar", "type": "typeB" }
And I'm posting this to _find:
{
    "selector": {
        "type": "typeA"
    },
    "sort": ["name"]
}
This works as expected, but I get a warning that there's no matching index, so I've tried posting various combinations of the following to _index, which makes no difference:
{
    "index": {
        "fields": ["type"]
    }
}
{
    "index": {
        "fields": ["name"]
    }
}
{
    "index": {
        "fields": ["name", "type"]
    }
}
If I remove the sort by name and only index the type, it works fine, except it's not sorted. Is this a limitation of CouchDB's Mango implementation, or am I missing something?
Using a view and a map function works fine, but I'm curious what Mango is/isn't doing here.
With just the type index, I think it will normally be almost as efficient unless you have many documents of each type (since it has to do the sorting stage in memory).
But since fields are ordered, it would be necessary to do:
{
    "index": {
        "fields": ["type", "name"]
    }
}
to have a contiguous slice of this index for each type that is already ordered by name. But the query planner may not determine that this index applies.
As an example, the current pouchdb-find (which should be similar) needs the more complicated but equivalent query:
{
    selector: {type: 'typeA', name: {$gte: null}},
    sort: ['type', 'name']
}
to choose this index and build a plan that doesn't resort to building in memory for any step.
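The same index-plus-query combination should work against CouchDB's /{db}/_find endpoint directly; a sketch of the equivalent body, mirroring the pouchdb-find query above in the JSON style used earlier:

{
    "selector": {
        "type": "typeA",
        "name": {"$gte": null}
    },
    "sort": ["type", "name"]
}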
