What does "No more variables left in this MIB View" mean (Linux)?

On Ubuntu 12.04 I am trying to get the subtree of management values with the following command:
snmpwalk -v 2c -c public localhost
with the last line of the output being
iso.3.6.1.2.1.25.1.7.0 = No more variables left in this MIB View (It is past the end of the MIB tree)
Is this an error? A warning? Does the subtree end there?

There's a bit more going on here than you might suspect. I encounter this on every new Ubuntu box that I build, and I do consider it a problem (not an error, but a problem--more on that further down).
Here's the technically-correct explanation (why this is not an "error"):
"No more variables left in this MIB View" is not particularly an error; rather, it is a statement about your request. The request started at something simple, say ".1.3" and continued to ask for the "next" lexicographic OID. It got "next" OIDs until that last one, at which point the agent has informed you that there's nothing more to see; don't bother asking.
Now, here's why I consider it a problem (in the context of this question):
The point of installing "snmpd" and running it is to gather meaningful information about the box; typically, this information is performance-oriented. For example, the three general things that I need to know about are network-interface information (IF-MIB::ifHCInOctets and IF-MIB::ifHCOutOctets), disk information (UCD-SNMP-MIB::dskUsed and UCD-SNMP-MIB::dskTotal), and CPU information (UCD-SNMP-MIB::ssCpuRawIdle, UCD-SNMP-MIB::ssCpuRawWait, and so on).
The default Ubuntu "snmpd" configuration specifically denies just about everything useful, limiting access to just enough information to tell you that the box is a Linux box:
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
rocommunity public default -V systemonly
This configuration locks the box down, which may be "safe" if it will be on an insecure network with little SNMP administration knowledge available.
However, the first thing that I do is remove the "-V systemonly" portion of the "rocommunity" setting; this will allow all available SNMP information to be accessed (read-only) via the community string of "public".
If you do that, then you'll probably see what you're expecting, which is pages and pages of SNMP information that you can use to gauge the performance of your box.
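For example, a minimal sketch of the change and a quick check (the file path and output vary by install; the numeric OID is shown in case the IF-MIB files are not loaded):
# /etc/snmp/snmpd.conf -- the rocommunity line with "-V systemonly" removed
rocommunity public default
# restart the agent and spot-check an interface counter
sudo service snmpd restart
snmpwalk -v 2c -c public localhost 1.3.6.1.2.1.31.1.1.1.6   # IF-MIB::ifHCInOctets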

I know this thread is probably very old, but the way I fix this is to use:
rocommunity public
and that should fix the problem.

Briefly, this is not an error. When you have "walked up" all the OIDs on your agent, it will show you this line.
Sometimes it won't show you this line, because the last OID you asked for is not on your agent (you have already walked all the OIDs on your agent, but not all the OIDs in the tree).

$ snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendObjects = No more variables left in this MIB View (It is past the end of the MIB tree)
You can also get this message when trying to see the output of executed (extend) scripts. I fixed that problem by adding the
view all included .1 80
line to snmpd.conf and then restarting the service.
After that you will see the output change for both inputs.
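For instance, a minimal sketch of such an snmpd.conf (the extend name and script here are hypothetical; only the view and rocommunity lines come from the fix above):
# /etc/snmp/snmpd.conf
extend heartbeat /bin/echo alive   # hypothetical extend script
view all included .1 80
rocommunity public default -V all
# after restarting snmpd:
snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendObjects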

Related

Memory Monitor using ':C XXXX' in RDi not Showing Variable Value

When monitoring memory using the :C XXXX option (which lets you monitor up to 4000 characters), the memory values do not show while debugging. I have run into this problem twice now. I am using IBM Rational Developer for i Version 9.6.0.0, with the Java JDK/JRE 8u45.
Here are the values when debugging, and my data structure definition:
dcl-ds dsSQL qualified inz;
fullStmt varchar( 9360 ) inz;
end-ds;
Once I click on the element, all I see is ``. There is nothing in the value but that, yet you can clearly see that dsSQL.fullStmt is not empty. I use this option daily and 99.99% of the time it works fine. I have to restart a million times, reset RDi to start with -c, and recompile the program over and over to get it to work right.
Anyone have any idea how to fix this? I would give you the 'Error Log' but it is constantly filled and nothing in there seems to point to that issue. When adding or looking at that variable during debug, no errors are thrown.
You probably want to update to the latest 9.6.0.6 release. They have fixed at least a few memory problems with the debugger between your release and the latest.
https://www-01.ibm.com/support/docview.wss?uid=swg27038481

How does DIG utility work in FreeBSD and BIND?

I want to know how the DIG (Domain Information Groper) command really works when it comes to code and implementation. I mean, when we enter a DIG command, which part of the code in FreeBSD or BIND is hit first?
Currently, when I run the DIG command, I see control going to a file called client.c. Inside this file, the following function is called:
static void
client_request(isc_task_t *task, isc_event_t *event);
But how control reaches this place is still a big mystery to me, even after digging a lot into the 'named' part of the BIND code.
Further, I see this function being called from two places within this file. I tried to put logs in those places to find out whether control reaches this function through those paths, but unfortunately it doesn't. It seems the client_request() function is somehow being called from somewhere else that I am not able to figure out.
Is there anybody here who can help me resolve this mystery?
Thanks.
Not only for BIND but for any other command, within FreeBSD you can use ktrace. It is very verbose, but it can help you get a quick overview of how the program is behaving.
For example, in the latest FreeBSD releases you have the drill command instead of dig, so if you would like to know what is happening behind the scenes when you run the command, you could try:
# ktrace drill freebsd.org
Then to disable tracing:
# ktrace -C
Once tracing is enabled on a process, trace data will be logged until either the process exits or the trace point is cleared. A traced process can generate enormous amounts of log data quickly; It is strongly suggested that users memorize how to disable tracing before attempting to trace a process.
After running ktrace drill freebsd.org, a file named ktrace.out should be created, which you can then read with kdump, for example:
# kdump -f ktrace.out | less
That will hopefully "reveal the mystery". In your case, just replace drill with dig and then use something like:
# ktrace dig freebsd.org
Thanks to the FreeBSD Ports system you can compile your own BIND with debugging enabled. To do so, run:
cd /usr/ports/dns/bind913/ && make install clean WITH_DEBUG=1
Then you can run it inside a debugger (lldb /usr/local/bin/dig), break on the line you are interested in, and look at the backtrace to figure out how control reached there.
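For example (a hypothetical lldb session; the file and line number are placeholders for whatever location you want to inspect):
$ lldb /usr/local/bin/dig
(lldb) breakpoint set --file dighost.c --line 100
(lldb) run freebsd.org
(lldb) bt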

How to request NASMT Q700 QNAP Linux hard disk SMART status using the SSH interface?

I use a NASMT Q700 QNAP NAS. For remote monitoring purposes I want to read some values and save them into a database.
Since the web interface is very complex and full of JavaScript, I cannot scrape it. So I tried to connect to the NAS with SSH.
That is great, because SSH is one of the methods I can connect with automatically from C#, and I get back text that I can parse.
The installed Linux system on the box is:
Linux NASMT 2.6.33.2 #1 Fri Mar 7 11:55:22 CST 2014 armv5tel unknown
Here is what I tried to reach my goal:
man is not installed.
smartctl is not installed. (Google told me to try this out)
I went into the /bin and /usr/bin directories and tried everything suspicious. There seems to be a program called nasutil installed, only it is not very self-documenting. Various calls with different parameters did not work; I always get the same answer:
nasutil multi-call binary
[function] [arguments]...
Current defined functions:
init_nas_cache, init_admin_group, set_file_owner, chk_flash, reset_all, chk10198, get_trusted_domain, update_krb5_ticket
rescan_hd, check_e2key, burn_e2key, cnt_phy_nic, http_link, ip_filter, hdusb_copy, ims, qpkg, gen_upnp_desc, scanafpdb
eset_system, umount_all_vdd, sss_convert, httpd_init, get_hwsn, get_suid, setsum, getsum, rsyslog_util, radius_util, send_alert_mail, rsync_util
acl_cmd check_ldap clean_reset_pwd network_boot_rescan
I used Google on this one but could not find anything useful.
I am looking for a command on this Linux system, without smartctl, that gives me a list of the installed hard drives with their SMART status.
Has anyone an idea?
Thank you very much in advance!
Actually, I was able to find the answer using email and contacts at Fujitsu.
The answer was as simple as can be:
# get_hd_smartinfo -d 1
Here, 1 is disk 1. Replace it with 2 if you want to check disk 2.
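If that works, a small loop run over SSH should collect all bays in one go (a sketch; the bay count and numbering are assumptions):
# query SMART info for the first four disk bays
for d in 1 2 3 4; do
    echo "=== disk $d ==="
    get_hd_smartinfo -d $d
done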
I have not tested it yet; as soon as I have, I'll accept the answer for everyone to see.

Wireshark core dumps during load

I have a wireshark dissector plugin.
I also have a wireshark installed from apt-get.
Wireshark loads fine when the plugin is not in place.
When I include the plugin .so file and try to run wireshark, I get the following error:
$ wireshark
08:23:45 Err register_subtree_array: subtree item type (ett_...) not -1 ! This is a development error: Either the subtree item type has already been assigned or was not initialized to -1.
Trace/breakpoint trap (core dumped)
I tried understanding the problem. It says the subtree was already assigned (I'm assuming assigned an ett value) or was not initialized to -1. There are 3 files in my plugin where the API is called, and I checked the values of ett[] being supplied to the API in each of these places. They are all initialized to -1.
I'm stuck at a roadblock. Any suggestion would be helpful.
Also, I do not understand where wireshark dumps the core. I could not find any core.
Any idea about this?
Generally, if you want to insert a plugin into a program, you have to ensure that the library API that the plugin was compiled against is the same as that provided by the program.
Unless wireshark provides documented versioning in its library API, this means that you have to have the plugin compiled against the same version of wireshark that you intend to use it with. So, if you compile your wireshark or the plugin yourself, you should compile the other as well. If you get your plugin in binary form, you should get your wireshark also from exactly the same place, otherwise you may not know if the two are compatible or not. If you only get a core dump when you insert the plugin, that's a strong indication that the two may not be compatible.
register_subtree_array: subtree item type (ett_...) not -1
...
there are 3 files in my plugin where the API is called and I checked the values of ett[] being supplied to the API in each of these places. They are all initialized to -1.
To which API are you referring? You must not call register_subtree_array() on any particular ett_ array more than once; if you're calling it twice, the first call will cause the ett_ values in the array to be set to values different from -1, so the next call will fail with that error.
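For reference, a minimal sketch of the pattern that avoids this error (all names here are hypothetical, not taken from your plugin): each ett_ variable starts at -1 and appears in exactly one registration call.
/* fragment of a dissector's register routine */
#include <epan/packet.h>

static int proto_foo = -1;
static gint ett_foo = -1;                    /* must start at -1 */

void
proto_register_foo(void)
{
    /* list every ett_ variable exactly once... */
    static gint *ett[] = { &ett_foo };

    proto_foo = proto_register_protocol("Foo Protocol", "FOO", "foo");

    /* ...and register the array exactly once.  A second call that lists
     * &ett_foo again would find it already set to a value other than -1
     * and fail with the "not -1" error. */
    proto_register_subtree_array(ett, array_length(ett));
}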

About the /proc file system

I am using a command on the proc file system, which is the following:
echo 0 > /proc/sys/net/ipv4/ip_forward
Note: I don't want to know the basics of the command written above; I want to know everything that happens when it goes into the kernel, since I want to implement a /proc file of my own.
Now, if I want to trace the code right from the moment the 0 is echoed into the file system, how do I go about it? I mean, I want to trace what happens when I do this.
I want to see where in the kernel code this 0 is accepted and in which variable it gets stored in order to make the change. Please, can somebody explain everything that happens when you run this command? I want a detailed explanation, not a description of the command.
Any related article on how it changes the kernel parameters is also fine.
I have read this, but it is not explained there: http://www.linuxjournal.com/article/8381
Thanks
Search through the Linux tree (especially the network stack) for the create_proc_entry function. Figure out which file creates ip_forward (it must be in the ipv4 code) from the name passed to create_proc_entry.
When you find the file, look at where the proc_dir_entry structure is created and which functions are assigned to its read_proc and write_proc members.
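As an illustration of that pattern, here is a minimal sketch using the old create_proc_entry()/write_proc API that the answer refers to (it was removed in kernel 3.10, where proc_create() with a file_operations/proc_ops structure replaces it); all names here are made up:
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

static int my_flag;                       /* the value an "echo 0 > /proc/my_flag" ends up in */

static int my_write_proc(struct file *file, const char __user *buffer,
                         unsigned long count, void *data)
{
    char kbuf[2] = { 0 };

    if (count < 1 || copy_from_user(kbuf, buffer, 1))
        return -EFAULT;
    my_flag = (kbuf[0] == '1');           /* store the parsed value */
    return count;                         /* consume the whole write */
}

static int __init my_proc_init(void)
{
    struct proc_dir_entry *entry;

    entry = create_proc_entry("my_flag", 0644, NULL);
    if (!entry)
        return -ENOMEM;
    entry->write_proc = my_write_proc;    /* the member the answer points at */
    return 0;
}
module_init(my_proc_init);
MODULE_LICENSE("GPL");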
