Really basic question. I come largely from working on Windows, so the vagaries of Linux are still awaiting discovery by me. I have a CentOS 7 box, and it seems it has rsyslog. How do I send messages from a C++ program to syslog (or rsyslog)? I have tried googling "linux log to syslog -logger -script" and it does not give me a single HOWTO reference. I get lots of "how to configure syslog to capture logs and send them to various files" type hits, but none that show how to use the logging mechanism itself. I'm looking for HOWTOs or illustrative code samples. Thanks!
See the syslog(3) man page for standard C Library functions and examples.
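For a concrete starting point, here is a minimal sketch of using that API from C++. The program name "myapp" and the LOG_USER facility are just placeholder choices: open a connection with openlog(), emit messages with syslog(), and close with closelog(). On CentOS 7, rsyslog picks these up and, with the default configuration, writes them to /var/log/messages.

#include <syslog.h>

int main() {
    // "myapp" is an arbitrary tag; LOG_USER is a reasonable default facility.
    // LOG_PID adds the process ID to each message.
    openlog("myapp", LOG_PID | LOG_CONS, LOG_USER);

    syslog(LOG_INFO, "service started");
    syslog(LOG_ERR, "something went wrong: code=%d", 42);

    closelog();
    return 0;
}

Compile with g++ and run it; no extra libraries are needed, since syslog(3) is part of glibc.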
2.5 months ago, I was running a website on a Linux server to do a user study on 3 variations of a tool. All 3 variations ran on the same website. While I was conducting my user study, the website (i.e., the process hosting the website) crashed. In my sleep-deprived state, I unfortunately did not record when the crash happened. However, I now need to know a) when the crash happened, and b) for how long the website was down until I brought it back up. I only have a rough timeframe for when the crash happened and for how long it was down, but I need to pinpoint this information as precisely as possible to do some time-on-task analyses with my user study data.
The server runs Ubuntu 16.04.4 LTS (GNU/Linux 4.4.0-165-generic x86_64) and has been minimally set up to run our website. As such, it is unlikely that any utilities aside from those that came with the OS have been installed. Similarly, little additional setup has likely been done. For example, I tried looking at the history of commands in the hope that HISTTIMEFORMAT had previously been set so that I could see timestamps. This ended up not being the case; while I can now see timestamps for commands, setting HISTTIMEFORMAT is not retroactive, meaning I can't get accurate timestamps for the commands I ran 2.5 months ago. That all being said, if you have an idea that you think might work, I'm willing to try it (as long as it doesn't break our server)!
It is also worth mentioning that I currently don't know whether it's possible to get a remote desktop or something of the like; I've just been ssh'ing in and using the terminal to interact with the server.
I've been bouncing ideas off friends and colleagues, and we all feel that there must be SOMETHING we could use to pinpoint when the server went down (e.g., network activity logs showing spikes around the time the user study began as well as when the website was revived, a log of previous/no-longer-running processes, etc.). Unfortunately, none of us know enough about Linux logs or commands to really dig deep into this very specific issue.
In summary:
I need a timestamp for either when the website crashed or when it was revived. It would be nice to have both (or to otherwise determine how long the website was down), but this is not strictly necessary
I'm guessing only a "native" Linux command will be useful, since nothing new/special has been installed on our server. Otherwise, any additional command/tool/utility will have to work retroactively.
It may or may not be possible to get a remote desktop working with the server (e.g., to use some tool that has a GUI you interact with to help get some information)
My colleagues and I have that sense of "there must be SOMETHING we could use" among the various logs and system information, such as network activity, process start times, etc., but none of us know enough about Linux to do the deep digging without some help
Any ideas for what I can try to help figure out at least when the website crashed (if not also for how long it was down)?
A friend of mine pointed me to the journalctl command, which shows timestamped system logs (kept independently of HISTTIMEFORMAT) that, in my case, went as far back as October 7. The journal contained enough information for me to determine both when I revived my Node.js server and when it initially went down.
I have to develop an application in C++ to monitor the state of processes on my Linux system, and I also need to know when a new process is created or an existing process terminates. Is there an API available for this? It would also be helpful if someone could tell me how to get started.
inotify works well for every directory I tried, except the proc filesystem. So I continued to search for a solution, and what I eventually reached was the proc connector and socket filters. Not much documented, but really worth it. Just have a look at:
http://netsplit.com/the-proc-connector-and-socket-filters
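To give a flavour of the proc connector interface that article describes, here is a minimal sketch in C++: it opens a netlink connector socket, subscribes with PROC_CN_MCAST_LISTEN, and prints fork/exec/exit events. It needs root (or CAP_NET_ADMIN) and the Linux-specific headers, error handling is kept to a bare minimum, and a real program would also walk multiple netlink messages per datagram.

#include <sys/socket.h>
#include <sys/types.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Netlink connector socket for kernel process events.
    int sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
    if (sock < 0) { perror("socket"); return 1; }

    // Join the proc-connector multicast group.
    sockaddr_nl addr{};
    addr.nl_family = AF_NETLINK;
    addr.nl_groups = CN_IDX_PROC;
    addr.nl_pid = getpid();
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    // Tell the kernel to start sending process events.
    struct {
        nlmsghdr hdr;
        cn_msg msg;
        proc_cn_mcast_op op;
    } sub{};
    sub.hdr.nlmsg_len = sizeof(sub);
    sub.hdr.nlmsg_type = NLMSG_DONE;
    sub.hdr.nlmsg_pid = getpid();
    sub.msg.id.idx = CN_IDX_PROC;
    sub.msg.id.val = CN_VAL_PROC;
    sub.msg.len = sizeof(proc_cn_mcast_op);
    sub.op = PROC_CN_MCAST_LISTEN;
    if (send(sock, &sub, sizeof(sub), 0) < 0) { perror("send"); return 1; }

    // Read events; a full program would iterate with NLMSG_OK/NLMSG_NEXT
    // and check nlmsg_type before decoding.
    alignas(NLMSG_ALIGNTO) char buf[4096];
    for (;;) {
        ssize_t len = recv(sock, buf, sizeof(buf), 0);
        if (len <= 0) break;
        auto* hdr = reinterpret_cast<nlmsghdr*>(buf);
        auto* msg = static_cast<cn_msg*>(NLMSG_DATA(hdr));
        auto* ev = reinterpret_cast<proc_event*>(msg->data);
        switch (ev->what) {
        case proc_event::PROC_EVENT_FORK:
            printf("fork: parent %d -> child %d\n",
                   ev->event_data.fork.parent_pid,
                   ev->event_data.fork.child_pid);
            break;
        case proc_event::PROC_EVENT_EXEC:
            printf("exec: pid %d\n", ev->event_data.exec.process_pid);
            break;
        case proc_event::PROC_EVENT_EXIT:
            printf("exit: pid %d (code %u)\n",
                   ev->event_data.exit.process_pid,
                   ev->event_data.exit.exit_code);
            break;
        default:
            break;
        }
    }
    close(sock);
    return 0;
}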
The way I reached this conclusion was through the answer provided by David Crookes to
Detect launching of programs on Linux platform.
Hope it helps someone in the future.
The main idea I am going to implement is, I think, quite new. I want to run a traceroute scan from the web: a traceroute request (or parameter) is submitted from a web page, passed to a Linux server where the traceroute is executed, and the result is returned to the website.
I have been through a series of instructions about CGI scripts and about passing parameters to a Linux machine. I have tried some bash commands through a REST interface, but I am not getting anywhere near what I want. For example, I have gone through this traceroute link:
http://ping.eu/traceroute/
and I am not able to understand how it works.
You can look into WebSockets. This is what travis-ci.org uses to display its console output.
I have never used this feature myself, but I am very interested in finding some documentation or tutorials, so if you find something please let me know.
Information abounds about syslog, but I can't find anything very concise for my purposes.
I have a user-created bash script that should log various debug, info, and error messages, and I'd like to use syslog. This is on an Ubuntu Server distribution.
I'm looking for a quick overview only.
I see many files in /etc/logrotate.d that aren't discussed in any man pages, which confuses me.
Should I be logging as user? local0-7?
Do I need to do something to configure this before I use these in a logger command?
How should I define what logs get created? Or is this already done?
With those questions answered I should be able to glean the details from the man pages.
You want the logger(1) utility, available in the bsdutils package.
From the man page:
logger - a shell command interface to the syslog(3) system log module
There's nothing essential to configure; just pass the switches you want. E.g.
logger -p local3.info -t myprogram "What's up, doc?"
You can now inspect wherever local3.info messages go and you will see something like this:
Jul 11 12:46:35 hostname myprogram: What's up, doc?
You only need to worry about logrotate if you need something fancier than this.
As for which log facility to use, I would use daemon for daemon messages and one of local0 through local7 for most other things. Consult syslog(3) for the purposes of the different facilities.
Don't worry about logrotate. It doesn't affect you if you're logging to the system log.
You can use any facility you like. See the syslogd configuration for what ends up where.
See the syslogd configuration for what ends up where.
See the... yeah, you get it.
I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job to create static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
I second Jonathan's answer: this is a log analyzer, meaning that you must feed it the webserver's logfile as input and it generates a summary of it. Given that you are on a shared host, it is improbable that you can access that file, and even if you could, it probably contains entries for all the websites hosted on the given machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know if it is common practice).
One possible workaround would be for you to write out a logfile from your own pages. However, this is rather difficult and can have a severe performance impact (for one, you have to serialize writes to the logfile if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
As fortune would have it, I do have access to the log file for my site. I've been able to generate the HTML page on the server manually; I've just been looking for a way to make it happen automatically. All I need is to execute a shell command and have the output display as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
# Emit the CGI header, then hand the request off to visitors;
# its HTML report goes to stdout and is served as the page.
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log