Which 3GPP spec identifies the maximum number of PDP Contexts a UE can activate? - umts

So I know that the network limit on the max number of contexts a UE can activate is 11. However, I can't find in the 3GPP specs where this is stated explicitly. I've been searching for hours now and can't find anything. Can anyone point me in the right direction?

Okay, I think I've figured it out. It turns out I just needed a little more time to keep digging. The limit is specified indirectly through the number of available NSAPIs that can be assigned. The number of NSAPIs is limited by the definition of the NSAPI IE in TS 24.008 (I'm using Rel-4 for this; the situation may have changed in later releases), section 10.5.6.2. The definition allocates 4 bits to the NSAPI, which encodes 11 usable NSAPIs and 5 reserved values. Since each PDP context must be allocated a distinct NSAPI, there can be at most 11 PDP contexts, because only 11 NSAPIs are available.
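To make the arithmetic concrete, here is a minimal sketch (my own illustration, not text from the spec): the 4-bit NSAPI field gives 16 code points, values 0-4 are reserved, and values 5-15 are assignable to PDP contexts.

public class NsapiRange {
    static final int NSAPI_MIN = 5;   // first assignable NSAPI value (0-4 are reserved)
    static final int NSAPI_MAX = 15;  // last assignable NSAPI value (top of the 4-bit field)

    public static void main(String[] args) {
        int usable = NSAPI_MAX - NSAPI_MIN + 1;
        System.out.println("Assignable NSAPIs: " + usable); // prints 11
    }
}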

Related

Calculation dynamic delay in AnyLogic

Good day!
Please help me understand how the Delay block works in AnyLogic. Suppose we deal with a multichannel transmission network.
The model has 2 sources. Suppose these sources generate packets every 1 sec. Packets from different sources have different priorities and need different quantities of resources to be served (set up with the Priority and Resource_quantity parameters, respectively). The Priority_queue in the model is priority-based. The model puts the packets into the channels according to resource availability in each channel. First, it tries to put the packet into the first channel. If there are no available resources, it puts the packet into the second channel. If there are no resources in either channel, it waits (this is realized with a Hold block).
I noticed that if I set the delays in blocks delay1 and delay2 with static parameters (for example, 2 sec), the model works fine. But when I try to calculate the delay before these blocks, the model doesn't take it into account at all, and in this case the model works without any delays.
What did I do wrong here?
I will appreciate any help.
The delay is calculated in the Exit block and is written into the variable delay of the agent. I tried to add traceln(agent.delay) right after the calculation of the delay, as #Jaco-Ben suggested, and it showed zero. In this case it also doesn't seize resources :(
Thanks to #Jaco-Ben for the useful comments.
The delay is zero because the result of division in Java depends on the types of the operands. If both operands are integers, the result will be an integer as well. To make Java perform real division (and get a real number as a result), at least one of the operands must be of a real type.
So that was my problem.
To solve it, I cast one of the operands to double:
agent.delay = (double)agent.Resource_quantity/ChannelResources1.idle();
However, it is still strange that the database showed the correct values.
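For clarity, here is a minimal stand-alone Java illustration of the pitfall (the values are hypothetical, not the actual model fields):

public class DivisionDemo {
    public static void main(String[] args) {
        int resourceQuantity = 3;   // e.g. agent.Resource_quantity
        int idleUnits = 4;          // e.g. ChannelResources1.idle()

        double wrong = resourceQuantity / idleUnits;           // integer division: 0.0
        double right = (double) resourceQuantity / idleUnits;  // real division: 0.75

        System.out.println(wrong + " vs " + right);            // prints "0.0 vs 0.75"
    }
}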

What is the reason for the corrupted fields problem in Splunk?

I have a problem with the search below for the last 25 days:
index=syslog Reason="Interface physical link is down" OR Reason="Interface physical link is up" NOT mainIfname="Vlanif*" "nw_ra_a98c_01.34_krtti"
Normally, field7 values look like these:
Region field7 Date mainIfname Reason count
ASYA nw_ra_m02f_01.34pndkdv may 9 GigabitEthernet0/3/6 Interface physical link is up 3
ASYA nw_ra_m02f_01.34pldtwr may 9 GigabitEthernet0/3/24 Interface physical link is up 2
But recently they were like this:
00:00:00.599 nw_ra_a98c_01.34_krtti
00:00:03.078 nw_ra_a98c_01.34_krtti
I think the problem may be related to this:
It started to happen after the disk free alarm. (-Cri- Swap reservation, bottleneck situation, current value: 95.00% exceeds configured threshold: 90.00%. : 07:17 17/02/20)
In particular, this is not about disk; it's about swap space: the application exhausts memory and then falls back to swap. Memory was increased before, but it was obviously insufficient, and it is switching to swap again.
I need to understand: why are they using so many resources?
Problematic one:
Normal one:
You need to provide example events, one from the normal situation, and one from the problematic situation.
It appears that someone in your environment has developed a field extraction for field7, which is incorrectly parsing the event.
Alternatively, the device that is sending the syslog data may have an issue and be reporting an error. Depending on the device, you may be better off using a TA from splunkbase.splunk.com to extract the relevant information from the event.

What are the most recent bittorrent DHT implementation recommendations?

I'm working on implementing yet another bittorrent client and am currently struggling with DHT. It is implemented according to this specification, http://www.bittorrent.org/beps/bep_0005.html, but when I started debugging it I noticed that other nodes' responses on the network vary.
For example, find_node is supposed to return either the target node info or the 8 closest nodes. Most of the nodes reply with 34 closest nodes, and usually only 1-3 nodes from those 34 successfully reply to the subsequent ping request.
Is there another document with better implementation recommendations? Maybe it has already been shown that using a 15-minute interval to change a node's state to questionable is not efficient and I should use 10 minutes or some other number? Where can I find the best up-to-date suggestions?
There is another strange thing. Bootstrap nodes like router.bittorrent.com reply with even more closest nodes, and usually the "nodes" BDictionary property buffer length is not divisible by 6 (compact node info: 4 bytes for IP and 2 for port). For now, I simply cut the buffer off at the nearest length divisible by 6, but all of that is strange. Does anybody know why that might happen?
the spec says (emphasis mine):
When a node receives a find_node query, it should respond with a key "nodes" and value of a string containing the compact node info for [...]
Further down:
Contact information for nodes is encoded as a 26-byte string. Also known as "Compact node info" the 20-byte Node ID in network byte order has the compact IP-address/port info concatenated to the end.
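In other words, each entry in the "nodes" value is 26 bytes (20-byte node ID + 4-byte IPv4 address + 2-byte port), not 6, which is why your buffer lengths don't divide by 6. Here is a minimal sketch of splitting that buffer (my own illustration, not code from any reference client):

import java.util.ArrayList;
import java.util.List;

public class CompactNodeInfo {
    static final int ENTRY_LENGTH = 26; // 20-byte node ID + 4-byte IP + 2-byte port

    static List<byte[]> splitNodes(byte[] nodesValue) {
        if (nodesValue.length % ENTRY_LENGTH != 0) {
            throw new IllegalArgumentException("\"nodes\" length is not a multiple of 26");
        }
        List<byte[]> entries = new ArrayList<>();
        for (int offset = 0; offset < nodesValue.length; offset += ENTRY_LENGTH) {
            byte[] entry = new byte[ENTRY_LENGTH];
            System.arraycopy(nodesValue, offset, entry, 0, ENTRY_LENGTH);
            entries.add(entry);
        }
        return entries;
    }

    public static void main(String[] args) {
        byte[] nodesValue = new byte[26 * 8];               // e.g. a response carrying 8 nodes
        System.out.println(splitNodes(nodesValue).size());  // prints 8
    }
}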
Additionally you should read the original Kademlia paper since the bittorrent BEP builds on the concepts described therein and omits deeper explanations of those concepts.
You might also want to read about a few extensions that are more or less de facto standard for most implementations now: http://libtorrent.org/dht_extensions.html
And read the other DHT-related BEPs, some are fairly widely adopted and modify/clarify BEP-5-specified behavior, but generally in a backward-compatible way.
For example, find_node is supposed to return either target node info or 8 closest nodes
Nodes will return a variable number of entries. It could be more than 8, or fewer.

2 Questions about RISC-V-Privileged-Spec-v1.7

Page 16, Table 3.1:
Base field in mcpuid: RV32I RV32E RV64I RV128I
What is "RV32E"?
Is there a "E" extension?
ECALL (page 30) says nothing about the behavior of the pc.
Meanwhile, mepc (page 28) and mbadaddr (page 29) claim that "mepc will point to the beginning of the instruction". I think ECALL should set mepc to the end of the causing instruction so that an ERET would go to the next instruction. Is that right?
As answered by CliffordVienna, RV32E ("embedded") is a new base ISA which uses 16 registers and makes some of the counter registers optional.
I would not recommend implementing a RV32E core, as it is probably an unnecessary over-optimization in core size that limits your ability to use a large body of RV*I code. But if performance is not needed, and you really need the core to be a tad smaller, and the core is not connected to a memory hierarchy that would dominate the area/power anyways, and you were willing to deal with the tool-chain headaches... then maybe an RV32E core is appropriate.
ECALL is treated like an exception, and will redirect the PC to the appropriate trap handler based on the current privilege level. MEPC will be set to the current PC of the ecall instruction.
You can verify this behavior by analyzing the Berkeley RV64G Rocket processor (https://github.com/ucb-bar/rocket/blob/master/src/main/scala/csr.scala), or by looking at the Spike ISA simulator (starting here: https://github.com/riscv/riscv-isa-sim/blob/master/riscv/insns/scall.h). Careful: as of 2015 Jun 27 the code is still in flux regarding the Privileged Spec.
If we look at how Spike handles eret ("sret": https://github.com/riscv/riscv-isa-sim/blob/master/riscv/insns/sret.h) for example, we have to be a bit careful. The PC is set to "mepc", but it's the trap handler's job to advance the PC by 4. We can see that done, for example, by the proxy kernel in some of the handler functions here (https://github.com/riscv/riscv-pk/blob/master/pk/handlers.c).
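To summarize that two-step behavior in a tiny, purely illustrative model (plain Java, not real RISC-V, Rocket, or Spike code; the field and method names are mine):

public class EcallSemantics {
    long pc;    // program counter
    long mepc;  // machine exception program counter CSR

    // Taking the ecall trap: mepc records the address of the ecall instruction itself.
    void takeEcallTrap(long trapHandlerAddress) {
        mepc = pc;
        pc = trapHandlerAddress;
    }

    // The trap handler (e.g. in riscv-pk) is responsible for advancing mepc
    // past the 4-byte ecall instruction before returning.
    void handlerAdvancesMepc() {
        mepc += 4;
    }

    // eret/sret resumes execution at mepc, i.e. at the instruction after the ecall.
    void eret() {
        pc = mepc;
    }
}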
A draft of the RV32E (embedded) spec can be found here (via isa-dev mailing list):
https://lists.riscv.org/lists/arc/isa-dev/2015-06/msg00022/rv32e.pdf
It's RV32I with 16 instead of 32 registers and without the counter instructions.

Explain the difference in how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed to its input.
In its per-target options, MRTG has 'gauge', 'absolute', and a default (with no options) behavior for what to do with incoming data, or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics, i.e. how many packets were received by the interface.
We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes increases every time. It's cumulative.
So, as I can imagine, there could be two types of possible statistics:
1. How fast the value changes over the time interval. In other words, activity.
2. A simple, as-is growing graph that just draws every new value every minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), and that the default or 'absolute' behavior tries to calculate the rate between neighbouring measurements, but what's the difference between the last two?
Can you guys explain in a simple manner which behavior corresponds to which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, which may wrap around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
For information on this, see Alex van der Bogaerdt's excellent tutorial.
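As a purely illustrative summary of the three behaviours (my own sketch in Java, not MRTG or rrdtool source code):

public class MrtgTypes {

    // 'gauge': the value is already a rate, so it is stored as-is.
    static double gauge(double value) {
        return value;
    }

    // 'absolute': the value is the count since the last update,
    // so it is divided by the elapsed time to get a per-second rate.
    static double absolute(double count, double secondsSinceLastUpdate) {
        return count / secondsSinceLastUpdate;
    }

    // 'counter' (default): the value is an ever-growing total; the delta from
    // the previous reading is used, compensating for a wraparound if the
    // counter appears to have gone backwards (a 32-bit counter is assumed here).
    static double counter(long current, long previous, double secondsSinceLastUpdate) {
        long delta = current - previous;
        if (delta < 0) {
            delta += 1L << 32; // counter wrapped
        }
        return delta / secondsSinceLastUpdate;
    }
}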

Resources