VLOOKUP function in Excel cuts part of a string

Guys, I have a little problem. I have 2 CSV files, and I want to copy some data from one CSV to the other where the ID is the same. For this I use the VLOOKUP function, but something is not right.
The original string in the original CSV is:
48 Port Managed Layer 3 Gigabit Ethernet Switch with optional 10GigE uplink and 802.3af and Legacy Power over Ethernet. Includes 48 - Copper Gigabit (1000Base-T) access ports and 2 - High Speed Expansion Slots. Provides up to 370 watts of 802.3af compliant power. Features include 802.1Q VLANs, GVRP, 802.1p QoS, 802.1w Rapid Spanning Tree, 802.3ad Link Aggregation, Auto MDI/MDI-X, CLI, HTTP GUI, SSH, SSL, RADIUS, SNMP. 19" Rackmount 1U housing. Includes AC PoE power supply. Supported expansion modules: Dual Stacking XIM (4700470F1, 4700470F2, 4700470F5), Dual SFP XIM (1700473F1), Dual SFP+ XIM (1700471F1).
And when I use this function:
=IFERROR(VLOOKUP($A2,osnova.csv!$B$2:$AD$1660,8,0),IF(G2="","",G2))
I get this string:
48 Port Managed Layer 3 Gigabit Ethernet Switch with optional 10GigE uplink and 802.3af and Legacy Power over Ethernet. Includes 48 - Copper Gigabit (1000Base-T) access ports and 2 - High Speed Expansion Slots. Provides up to 370 watts of 802.3af compliant power. Features include 802.1Q VLANs, GVRP, 802.1p QoS, 802.1w Rapid Spanning Tree, 802. 19" Rackmount 1U housing. Includes AC PoE power supply. Supported expansion modules: Dual Stacking XIM (4700470F1, 4700470F2, 4700470F5), Dual SFP XIM (1700473F1), Dual SFP+ XIM (1700471F1).
The difference is that the original string contains this part, which is missing from the copied version:
.3ad Link Aggregation, Auto MDI/MDI-X, CLI, HTTP GUI, SSH, SSL, RADIUS, SNMP.
Can someone help me with this? Did I do something wrong in my function?

Your version of Excel must be hitting VLOOKUP's character limit on returned text. You should not be using this function in the first place; it's broken and it sucks. Consider the much superior INDEX/MATCH combination (note that column 8 of $B:$AD is column I):
=INDEX(osnova.csv!$I$2:$I$1660, MATCH($A2, osnova.csv!$B$2:$B$1660, 0))

Related

WiFi station bandwidth 160MHz / 80MHz control (Linux)

I observe the WiFi station TX bandwidth being reduced from 160MHz to 80MHz when the station is farther away from the AP than when it is close. I'm using the "iw wlan0 station dump" command to check that. The AP is forced to 160MHz and it actually uses 160MHz for the downlink in both cases. But the AX200 station uses 80MHz for the uplink once the RSSI drops below roughly -60dBm.
I've checked this with an Intel AX200 card. To confirm it is not card-related, I also checked a Broadcom Xeon 1200 card. Same result. A number of different APs were tested as well. All results are consistent.
Since the Intel AX200 uses Intel's proprietary rate control algorithm "iwl-mvm-rs" and Broadcom uses some other one, I conclude the bandwidth limitation must be introduced by Linux itself (mac80211 / cfg80211?). Which part could it be? Can I fix it to 160MHz?
This bandwidth reduction is probably part of the rate control algorithm, but the strange thing is that the AP downlink bitrate is, for example, 500Mbit/s (160MHz) while at the same time the uplink is 250Mbit/s (80MHz). At closer locations the bitrate is the same, e.g. 1000Mbit/s (160MHz) for both downlink and uplink. Thus this might be some kind of bug that reduces the bandwidth too early.

How to find out which I/O ports are assigned to my devices

Has Linux reserved I/O port numbers for all manufactured devices?
I have devices like an Intel built-in network card, and another device I have for WiFi (USB) from Realtek.
In the Linux repository on GitHub, device drivers use specific I/O ports to register, and the kernel assigns those ports to the device driver. Device drivers normally request ports using a call to the request_region function. For example, one Ethernet driver requests them like the following:
/* Probe loop (from the 3c509 ISA Ethernet driver): walk the candidate
 * port range, reserve one port at a time, poke it, and keep the first
 * port that responds like the expected hardware. */
for (id_port = 0x110; id_port < 0x200; id_port += 0x10) {
    if (!request_region(id_port, 1, "3c509-control"))
        continue;                    /* port already claimed by someone else */
    outb(0x00, id_port);             /* start the ID sequence */
    outb(0xff, id_port);
    if (inb(id_port) & 0x01)
        break;                       /* device answered: keep this region */
    else
        release_region(id_port, 1);  /* nothing there: give the port back */
}
The loop above scans from 0x110 to 0x200; any port in this range can be assigned by the kernel to the driver. A port appearing in the /proc/ioports file means the driver has been using it since the successful return from request_region.
Question: So my question is, has Linux assigned I/O ports to all manufactured devices usable with kernel 5.7 or the latest kernel version?
Question: What if I want to write a device driver for some device? How can I find the I/O port number range to request? I do not expect that I have to look into the kernel code and find a similar driver's port range. So how can I find that I/O port number range? How do I achieve this first step required in writing a device driver (for any device, be it a WiFi USB device or an Ethernet device)?
Question: So my question is, has Linux assigned I/O ports to all manufactured devices usable with kernel 5.7 or the latest kernel version?
No.
Question: What if I want to write a device driver for some device? How can I find the I/O port number range to request?
You ask the user for it. After all, it's the user who set them, using jumpers on the ISA card.
Here's a picture of an old Sound Blaster card (taken from Wikipedia; I'm too lazy to rummage around in my basement right now). I've highlighted a specific area in the picture:
The jumper header I highlighted is the port configuration jumper. As a user you literally connect two of the pins with a jumper connector, and that connects a specific address line coming from the card connector to the circuitry on the rest of the card. This address line is part of the AT bus port I/O scheme. The user sets this jumper, writes down the number, and then tells the driver which number it was set to. That's how AT-style I/O ports were configured.
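In Linux driver terms, "telling the driver" classically means a module parameter. Here is a minimal sketch under that assumption (the parameter name io follows the old ISA-driver convention; the "demo" names are made up):

#include <linux/module.h>
#include <linux/ioport.h>

/* The user passes the jumpered port, e.g.: insmod demo.ko io=0x220 */
static unsigned long io = 0x220;
module_param(io, ulong, 0444);
MODULE_PARM_DESC(io, "base I/O port, as set by the jumpers on the card");

static int __init demo_init(void)
{
    if (!request_region(io, 16, "demo"))   /* claim 16 ports at the base */
        return -EBUSY;
    pr_info("demo: using I/O ports 0x%lx-0x%lx\n", io, io + 15);
    return 0;
}

static void __exit demo_exit(void)
{
    release_region(io, 16);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");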
Or the driver uses one of the well-known port numbers for specific hardware (like network controllers) that date back to the era when ISA-style ports were still a thing. There's also the old ISA-P'n'P, where the BIOS and the add-in cards would negotiate the port assignments at power-up, before the OS even started. You can read those port numbers with the ISA-P'n'P API provided by the kernel.
We no longer use this kind of hardware in practice, except for legacy and retro-computing purposes!
Over a quarter of a century ago, the old AT / ISA bus was superseded by PCI. Today we use PCIe, which, from the point of view of software, still looks like PCI. One of the important things about PCI was that it completely dropped the whole concept of ports.
With ISA, what you had were 8 data lines and 16 address lines, plus two read/write enable lines, one for memory-mapped I/O and one for port I/O. You can find the details here: https://archive.is/3jjZj. When you read from, say, port 0x0104, the CPU would physically set the bit pattern 0x0104 on the address lines of the ISA bus, pull the read enable line low, and then read the voltage level on the data lines. And all of that is implemented as an actual set of instructions of the x86: https://c9x.me/x86/html/file_module_x86_id_139.html
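To make the port-I/O model concrete, here is a minimal user-space sketch (x86 Linux only, must run as root; port 0x0104 is just the example address from the text, not real hardware):

#include <stdio.h>
#include <sys/io.h>   /* ioperm(), inb() -- x86 Linux, glibc */

int main(void)
{
    /* Ask the kernel for access to one port; needs root/CAP_SYS_RAWIO. */
    if (ioperm(0x0104, 1, 1) < 0) {
        perror("ioperm");
        return 1;
    }
    /* inb() compiles down to the x86 IN instruction: the address lines
     * are driven with 0x0104, read enable is pulled, data lines sampled. */
    unsigned char value = inb(0x0104);
    printf("port 0x0104 reads 0x%02x\n", value);
    return 0;
}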
Now look at the PCI bus: There's no longer separate data and address lines. Instead read/write commands would be sent, and everything happens through memory mappings. PCI devices have something called a BAR: a Base Address Register. This is configured by the PCI root complex and assigns the hardware the region of actual physical bus addresses where it appears. The OS has to get those BAR information from the PCI root complex. The driver uses the PCI IDs to have the hardware discovered and the BAR information told to it. It can then do memory reads/writes to talk to the hardware. No I/O ports involved. And that is just the lowest level. USB and Ethernet happen a lot further up. USB is quite abstract, as is Ethernet.
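As a rough illustration of that discovery flow, here is a minimal kernel-module sketch (the vendor/device IDs are made up; a real driver matches its own hardware and does more careful resource handling):

#include <linux/module.h>
#include <linux/pci.h>

/* Made-up IDs, for illustration only. */
static const struct pci_device_id demo_ids[] = {
    { PCI_DEVICE(0x1234, 0x5678) },
    { 0 }
};
MODULE_DEVICE_TABLE(pci, demo_ids);

static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
    void __iomem *regs;
    int err = pci_enable_device(pdev);
    if (err)
        return err;

    /* The PCI core has already read the BARs from the root complex;
     * the driver just maps BAR 0 and talks to it via memory accesses. */
    regs = pci_iomap(pdev, 0, 0);
    if (!regs) {
        pci_disable_device(pdev);
        return -ENOMEM;
    }
    dev_info(&pdev->dev, "BAR0 mapped, first register = 0x%x\n", ioread32(regs));
    pci_set_drvdata(pdev, regs);
    return 0;
}

static void demo_remove(struct pci_dev *pdev)
{
    pci_iounmap(pdev, pci_get_drvdata(pdev));
    pci_disable_device(pdev);
}

static struct pci_driver demo_driver = {
    .name     = "demo-pci",
    .id_table = demo_ids,
    .probe    = demo_probe,
    .remove   = demo_remove,
};
module_pci_driver(demo_driver);
MODULE_LICENSE("GPL");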
Your other question, "Looking for driver developer datasheet of Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz", suggests that you have some serious misconceptions about what is actually going on. You were asking about USB devices and Ethernet ports. Neither of those interacts in any way directly with this part of the computer.
Your question per se is interesting. But we're also running into a massive XYZ problem here; it's worse than an XY problem: you're asking about X although you want to solve Y, but Y isn't even the problem you're dealing with in the first place.
You're obviously smart and curious, and I applaud that. But I have to tell you that you'll have to backtrack quite a bit to clear up some of the misconceptions you have.

Autosar Network Management SWS 4.2.2. - Partial networking

In the Autosar NM 4.2.2 NM PDU filter algorithm,
what is the significance of CanNmPnFilterMaskByte? I understand that it is ANDed as a mask with the partial-networking info of an incoming NM PDU to decide whether to participate in communication or not. But please explain briefly how exactly it works.
You are actually talking about Partial Networking: if certain functional clusters are not needed anymore, they can go to sleep and save power.
ECUs supporting PN check all NmPdus for the PartialNetworkingInfo (PNI, where each bit represents a functional-cluster status) in the NmPdu's UserData.
The PnFilterMask filters out any PNI info that is irrelevant to the ECU, i.e. functions the ECU does not contribute to in any way. If, after applying the filter, everything is 0, the NmPdu is discarded and therefore does not restart the Nm-Timeout timer. This is what eventually brings the Nm into the go-to-sleep phase, even though NmPdus are still being transmitted.
By ECU, also consider gateways.
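A minimal sketch of that filtering step (the names and the mask value are illustrative, not the real AUTOSAR CanNm API; the 0x1C mask matches the rear-radar example discussed further below):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Configured per ECU, one mask byte per PNI byte;
 * e.g. 0x1C = bits 2,3,4 for a rear radar sensor. */
static const uint8_t CanNmPnFilterMaskByte[1] = { 0x1C };

/* Returns true if the received PNI requests at least one function this
 * ECU contributes to; only then may the PDU restart the Nm-Timeout timer. */
bool PniIsRelevant(const uint8_t *pni, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (pni[i] & CanNmPnFilterMaskByte[i])
            return true;
    }
    return false;  /* all bits masked out: discard for NM timeout purposes */
}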
Update: how to determine the mask
As described above, each bit represents a function.
Bit0 : Func0
..
Bit7 : Func7
The OEM now has to check which ECUs in the vehicle are required (or not) for which functions, possibly only in certain states, and how to lay out the vehicle networks.
Here are some function examples, and ECUs required excluding gateways:
ACC: 1 front radar sensor
EBA: 1 camera + 1..n front radar sensors
ParkDistanceControl (PDC): 4 front + 4 rear sensors + visualization in the dashboard
Backup Camera: 1 camera + visualization ECU (which draws the lines that show, according to steering angle/speed, where the vehicle would move within the camera picture)
Blind Spot Detection (BSD) / LaneChangeAssist (LCA): 2 radar sensors in the rear + MirrorLed Control and Buzzer Control ECU
Rear Cross Traffic Assist (RCTA) (w/ or w/o Brake + Alert): 2 radar sensors in the rear + MirrorLed Control and Buzzer Control ECU
Occupant Safe Exit (OSE) (warn or keep doors closed in case something approaches): 2 rear radar sensors + DoorLock ECU(s)
The next thing is that some functions are distributed over several ECUs.
E.g. the 2 rear radar sensors can implement the whole BSD/LCA, RCTA and OSE functionality, possibly including the LED driver for the MirrorLEDs and a rear buzzer driver, or they send this information over CAN to a central ECU which handles the MirrorLEDs and the rear buzzer. (Such short-range radar sensors are what I have been working on for a long time now, and the number of different functions has grown over the years.)
The camera can have some companion radar sensors (e.g. the one ACC runs on, or some short-range radars) to help verify/classify image data/objects.
The PDC sensors may also be small ECUs delivering their information to a central PDC ECU, which actually handles the output to the dashboard.
So, not all of them need to be activated all the time and pull on the battery.
BSD/LCA and RCTA/B need to work while driving or parking: RCTA/B only when reverse gear is selected, BSD/LCA only in forward gear or neutral, PDC only when parking (low-speed forward/reverse), and the backup camera only when reverse gear is engaged for parking. OSE can be active at standstill, with the engine on (e.g. dropping off a passenger at a traffic light) or with the engine off (driver leaves and locks the vehicle).
Now, for each of these cases, you need to know:
which ECUs are still required for each vehicle state and functional state
the network topology telling you, how these ECUs are connected.
You need to consider gateway ECUs here, since they have to route certain information between multiple networks.
You would assign 1 bit of the Nm flags per function or function cluster (e.g. BSD/LCA/RCTA = 1 bit, OSE = 1 bit, BackupCam/PDC (e.g. "parking mode") = 1 bit).
e.g. CanNmPnInfo Flags might be defined as:
Bit0 : PowerTrain
Bit1 : Navi/Dashboard Cluster
Bit2 : BSD/LCA/RCTA
Bit3 : ParkingMode
Bit4 : OSE
...
Bit7 : SmartKeyAutomaticBackDoor (DoorLock with the key nearby, detecting a swipe/motion to automatically open the back door)
It may also be possible to have CL15 (ignition) devices without PNI, because their functions are only active while the engine is on, like ACC, EBA, TrafficJamAssist ... (even BSD/LCA/RCTA could be considered like that). You could perhaps handle them without CL30 (permanent supply) + PNI.
So, you now have an assignment of functions to bits in the PNI, and you know which ECUs are required.
E.g. the radar sensors in the rear need the mask 0x1C (bits 2,3,4), although they need to be aware that some ECUs might not deliver information anymore because they are off (e.g. speed and steering angle from the powertrain, turned off after CL15 off -> OSE), and that this is not an error (CAN message timeouts).
The gateway might need some more bits in the mask, in order to keep subnetworks alive, or to actually wake up some networks and their ECUs (e.g. a remote key waking up the DoorLock ECUs).
So a gateway in the rear might have 0xFC as its mask, but a front gateway 0x03.
The backup camera might only be activated at low speed (<20km/h) in reverse gear, while the PDC sensors can work without reverse gear.
The PNI flags are usually defined by the OEM, because they are a vehicle-level architectural item; they can normally not be defined by a supplier.
They should actually be part of the AUTOSAR ARXML SystemDescription (see AUTOSAR_TPS_SystemTemplate.pdf):
EcuInstance --> CanCommunicationConnector (pnc* attributes)
Usually, the AUTOSAR configuration tools should support extracting this information automatically to configure CanNm / Nm and ComM (User Requests).
Sorry for the delay, but finding an example to describe it can be quite tedious. I hope it helps.

Transmitting odd number of bits serially

I'm implementing the LIN protocol on a Linux SBC that transmits over a UART. I don't have time to develop a complete LIN stack, so I'm just implementing the frame structure for messages as defined by the protocol. The problem is that the protocol requires a "break" field which makes the slave devices on the bus listen. This field consists of zeros for 13 bit-times. Any ideas how to send zeros for 13 bit-times over a UART, when serial data transmission works in complete bytes?
Per Wiki:
LIN (Local Interconnect Network) is a serial network protocol used for communication between components in vehicles. The need for a cheap serial network arose as the technologies and the facilities implemented in the car grew, while the CAN bus was too expensive to implement for every component in the car. European car manufacturers started using different serial communication topologies, which led to compatibility problems.
If you had paid attention in class, you would have known that:
Data is transferred across the bus in fixed-form messages of selectable lengths. The master task transmits a header that consists of a break signal followed by synchronization and identifier fields. The slaves respond with a data frame that consists of between 2, 4 and 8 data bytes plus 3 bytes of control information.
A common trick is to temporarily switch the UART to a lower baud rate and send a single 0x00 byte: the start bit plus eight zero data bits then stretch over at least 13 bit-times at the nominal rate, which the slaves see as a valid break. Alternatively, the Linux tcsendbreak()/TIOCSBRK interface can drive the line low directly, though its break duration is much longer than 13 bit-times.
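A minimal sketch of the baud-rate trick, assuming a 19200 baud LIN bus (rates are examples; error handling is trimmed):

#include <stdint.h>
#include <termios.h>
#include <unistd.h>

/* Send a LIN break by transmitting 0x00 at half the nominal baud rate:
 * the start bit + 8 zero data bits at 9600 baud last 18 bit-times at
 * 19200 baud, comfortably more than the required 13. */
int send_lin_break(int fd)
{
    struct termios tio;
    const uint8_t zero = 0x00;

    if (tcgetattr(fd, &tio) < 0)
        return -1;

    cfsetospeed(&tio, B9600);              /* temporarily slow down */
    if (tcsetattr(fd, TCSADRAIN, &tio) < 0)
        return -1;

    if (write(fd, &zero, 1) != 1)
        return -1;
    tcdrain(fd);                           /* wait until it left the wire */

    cfsetospeed(&tio, B19200);             /* back to the nominal LIN rate */
    return tcsetattr(fd, TCSADRAIN, &tio);
}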

Bluetooth SPP throughput

I am trying to figure out what the maximum throughput of a Bluetooth 2.1 SPP connection is.
I found 2 publications on the topic (1, 2), and they both show diagrams plotting the throughput as a function of the signal-to-noise ratio (which I can assume to be perfect for my consideration) and of the kind of ACL packet used. My problem is, I have no idea which ACL packets are used. How is this decision made? Is it made on the fly, like "whatever is needed to transfer the current data is used"?
Furthermore, in the Serial Port Profile specification (chapter 2.3) I found this sentence:
This profile requires support for one-slot packets only. This means that this profile ensures that data rates up to 128 kbps can be used. Support for higher rates is optional.
The last sentence really confuses me. How do I find out whether this "option" applies in my case?
This means that in SPP mode, all Bluetooth modules should work at up to 128kbps, and some modules may work even faster.
Underneath SPP sits RFCOMM, which tries to deliver the packets as quickly as possible. If one packet is sent per timeslot, you get the 128kbps. The firmware of the Bluetooth module, or the HCI driver, can however do things differently.
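As a back-of-the-envelope check of that one-slot figure (assuming basic-rate DH1 packets with a 27-byte payload and 625 µs slots; the mapping to exactly 128 kbps is my reading, not something the SPP spec spells out):

#include <stdio.h>

int main(void)
{
    /* A DH1 packet carries up to 27 payload bytes in one 625 us slot;
     * the peer's return packet takes the next slot, so one data packet
     * goes out every 1250 us. */
    const double payload_bits = 27 * 8;
    const double period_s = 2 * 625e-6;
    printf("raw one-slot rate: %.1f kbps\n", payload_bits / period_s / 1000.0);
    /* Prints ~172.8 kbps; L2CAP/RFCOMM overhead and scheduling gaps are
     * what bring the profile's guaranteed figure down to 128 kbps. */
    return 0;
}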
There are reports of SPP speeds up to 480kbps; however, this requires that both SPP modules are from the same vendor (e.g. BlueGiga iWrap modules can do this speed).
On the other end, if you're connecting to an unknown device, for example a BT112 or an RN41 module to an Android device, the actual usable SPP bandwidth can be much lower than 128 kbps (I have measurements as low as 10kbps).
In the case of some older-generation iPhones, the usable SPP bandwidth is around 8 kbps.
It is wise to treat "standards" and "datasheets" very conservatively and do your own measurements if the actual net data bandwidth is critical.
Even though BT and BT+EDR have theoretical on-air bitrates of up to 3Mbps, the actual usable net data bandwidth is way smaller.
