Network printer doesn't accept jobs from Debian Linux, no errors in error_log

There is a shared printer at my workplace. We send jobs to it and then walk over and authenticate, so the printer only prints your documents while you are present at it. We change our domain passwords periodically, so I also have to update the password in /etc/cups/printers.conf (Windows users just change their domain password). That's how it has worked so far.
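For context, with an smb:// queue the credentials live in the DeviceURI line, so the edit after each password change looks roughly like this (server, share and account names are made up; the URI format is the usual smbspool smb://user:password@workgroup/server/share form, and cupsd should be stopped while editing the file, as its header warns):
sudo service cups stop
# in /etc/cups/printers.conf:
# DeviceURI smb://myuser:NewPassword@MYDOMAIN/printserver/PrinterShare
sudo service cups start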
But suddenly it stopped accepting my jobs. When I send a job there are no errors, only this:
sudo tail /var/log/cups/access_log
localhost - - [14/Apr/2015:12:15:14 +0300] "POST /printers/Generic-PCL-6-PCL-XL HTTP/1.1" 200 499 Create-Job successful-ok
localhost - - [14/Apr/2015:12:15:14 +0300] "POST /printers/Generic-PCL-6-PCL-XL HTTP/1.1" 200 1273674 Send-Document successful-ok
localhost - - [14/Apr/2015:12:17:59 +0300] "POST / HTTP/1.1" 200 183 Renew-Subscription successful-ok
On the CUPS web page the job state shows as "Pending since (date/time)".
It seems like the job was sent successfully, but when I get to the printer there is nothing, and no job in my queue. Our IT support only fixes problems for Windows users; those of us on Linux are on our own. So I don't know what to do or which logs I should inspect. Please help.

Probably some update broke it. But I have found another solution: I added the printer not via Samba but via lpd, and it doesn't ask for a username/password:
cat /etc/cups/printers.conf
# Printer configuration file for CUPS v1.5.3
# Written by cupsd
# DO NOT EDIT THIS FILE WHEN CUPSD IS RUNNING
<DefaultPrinter KonicaMinolta>
UUID urn:uuid:0f60c08a-ecfb-326a-421c-86aa3519147b
Info MyCompany Office printer
Location WestCorridor
MakeModel Generic PostScript Printer Foomatic/Postscript (recommended)
DeviceURI lpd://Company_printer_server_address/lp
State Idle
StateTime 1429265417
Type 8433692
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy stop-printer
</Printer>
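For completeness, roughly the same queue can be created from the command line instead of editing printers.conf by hand; something like the following should be equivalent (the -m value is only an example, lpinfo -m lists the driver names actually available on your system):
sudo lpadmin -p KonicaMinolta -E -v lpd://Company_printer_server_address/lp -m drv:///sample.drv/generic.ppd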
If somebody can provide another solution, or some explanation of why this is so, I will be glad to see it.

As far as debugging goes, you can get more data in your CUPS logs if you edit your /etc/cups/cupsd.conf file, find the LogLevel directive, and change "info" to "debug".
Then you should restart CUPS with:
/etc/init.d/cups restart
Then your log will be in
/var/log/cups/error_log
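On a Debian-style system that boils down to something like this (the sed assumes a single LogLevel line in cupsd.conf):
sudo sed -i 's/^LogLevel .*/LogLevel debug/' /etc/cups/cupsd.conf
sudo /etc/init.d/cups restart
sudo tail -f /var/log/cups/error_log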

Related

GitLab Health Check without token

I've got GitLab 10.5.6. I'd like to use the Health Check information in my monitoring system. I can configure it by using the Health Check endpoints with the health check access token, but as that solution is deprecated, I want to use the IP whitelist. And I'm having some problems with it.
According to this article https://docs.gitlab.com/ee/administration/monitoring/ip_whitelist.html I edited /etc/gitlab/gitlab.rb and added this line (this GitLab was originally installed around version 7 or even older, I think):
gitlab_rails['monitoring_whitelist'] = ['127.0.0.0/8', '192.168.0.1', 'X.X.X.X', 'Y.Y.Y.Y']
where X.X.X.X is the IP of my computer and Y.Y.Y.Y is the IP of the server running GitLab. After that I ran a reconfiguration (gitlab-ctl reconfigure) and started testing... The logs below are from the production.log file.
Running curl http://127.0.0.1:8888/-/readiness on server Y.Y.Y.Y returns proper JSON with the expected data:
Started GET "/-/readiness" for 127.0.0.1 at 2018-03-24 20:01:31 +0100
Processing by HealthController#readiness as /
Completed 200 OK in 27ms (Views: 0.6ms | ActiveRecord: 0.5ms)
Running curl http://Y.Y.Y.Y:8888/-/readiness on server Y.Y.Y.Y returns an error:
Started GET "/-/readiness" for Y.Y.Y.Y at 2018-03-24 21:20:04 +0100
Processing by HealthController#readiness as /
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 1.0ms | ActiveRecord: 0.0ms)
Accessing http://Y.Y.Y.Y:8888/-/readiness through the Firefox browser on computer X.X.X.X returns an error:
Started GET "/-/readiness" for X.X.X.X at 2018-03-24 20:03:04 +0100
Processing by HealthController#readiness as HTML
Filter chain halted as :validate_ip_whitelisted_or_valid_token! rendered or redirected
Completed 404 Not Found in 2ms (Views: 0.8ms | ActiveRecord: 0.0ms)
Accessing http://Y.Y.Y.Y:8888/-/readiness?token=ZZZZZZZZZZZZZ through the Firefox browser on computer X.X.X.X returns proper JSON with the expected data.
I have no idea what else to check. Maybe some further configuration is missing in /etc/gitlab/gitlab.rb, as it's quite an old GitLab instance.
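(One way to double-check that the whitelist actually made it into the rendered configuration after gitlab-ctl reconfigure, assuming the default Omnibus paths, is to look at the generated gitlab.yml; if it looks right there, a plain restart is worth trying before digging further:)
sudo grep -A 6 'monitoring:' /var/opt/gitlab/gitlab-rails/etc/gitlab.yml
sudo gitlab-ctl restart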

Varnish 5.2 started reporting "500 Internal Server Error"

I have been running Varnish for some time, and about 6 months ago I added a Varnish 5.2 server that had been running perfectly.
A couple of weeks ago we started to see odd "500 Internal Server Error" responses, and older reports suggest this happens when the server runs out of internal memory.
There was a suggestion to tune the parameters (which I have tried), but I am still getting the errors. Any suggestions on where to look?
Alan
PS The suggested tuning I saw was:
-p workspace_client=160k \
-p workspace_backend=160k \
That is, raising the workspace parameters from the default 64k; I tried 128k and then 160k, but there was no change in the occasional reported errors.
You can control the "maximum number of HTTP header lines" Varnish allows via the http_max_hdr parameter. The default is 64 and, in my case, setting it to 128 or 256 solved my problem. Note that, for some reason, the value needs to be a power of two, so setting it to 100 or 150 will not allow Varnish to restart.
https://varnish-cache.org/docs/4.1/reference/varnishd.html#http-max-hdr
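These are runtime parameters, so they go on the varnishd command line (or in the systemd unit / DAEMON_OPTS) alongside the workspace settings above; the numbers here are only examples, and they can also be changed on a running instance with varnishadm, though a restart is the safer bet:
-p http_max_hdr=128 \
-p workspace_client=256k \
-p workspace_backend=256k
varnishadm param.set http_max_hdr 128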
After much playing and looking at varnish log:
sudo /usr/local/bin/varnishlog -n -q 'RespStatus eq 500'
I saw the error:
- RespHeader X-1-SM-None: None
- LostHeader X-1-ServerTXT: Live One
- Error out of workspace (req)
- LostHeader X-1-Cache: MISS
- Error out of workspace (Req)
- VCL_return deliver
- Timestamp Process: 1513078776.040695 0.419343 0.000086
- Error out of workspace (Req)
- LostHeader Accept-Ranges: bytes
- LostHeader Connection: keep-alive
- Error out of workspace (Req)
- Error workspace_client overflow
- RespProtocol HTTP/1.1
- RespStatus 500
- RespReason Internal Server Error
I realised that there were too many resp.http headers being set in vcl_deliver; after removing or commenting out some of the ones I was only using for debugging, the problem went away.
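In VCL terms the fix was just deleting or commenting out the surplus header assignments in vcl_deliver, along these lines (the header names are the debug ones from the log above, purely illustrative):
sub vcl_deliver {
    # set resp.http.X-1-ServerTXT = "Live One";  # debug-only headers like these were eating req workspace
    # set resp.http.X-1-Cache = "MISS";
    set resp.http.X-1-SM-None = "None";          # keep only the headers you actually need
}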

Gitlab: pushes registering with repo, but pipelines not running and projects dashboard 'last updated' is not changed

When we push to our repository, we expect a pipeline to run. However, the pipelines have stopped starting automatically when we push.
In addition, when we try to start the pipeline manually, not all the tags and branches are showing in the dropdown list of tags and branches to choose from. When we browse the repository in Gitlab, we can see the branches and the pushed commits.
Finally, on the /dashboard/projects page, the 'last updated' date of the project is out of date, saying "yesterday" rather than "10 mins ago" (which is what shows when viewing the repository within the project).
We recently migrated servers, so we expect there is some migration issue going on here. Does anyone have any ideas where to look to solve this problem (i.e. which sub-systems could be not working or misconfigured to produce this behaviour)?
Gitlab version: 9.4.2
Run with Docker using: https://hub.docker.com/r/gitlab/gitlab-ce/
Update
I tailed the logs while pushing to the repository; what follows is a chunk of the logs from around that time (starting with the SSH connection for the push). The 404 around Prometheus is potentially interesting, but I'm not sure it's unexpected (we're not using it):
==> /var/log/gitlab/sshd/current <==
2017-08-01_17:05:16.86559 Accepted publickey for git from (removed) port 57680 ssh2: RSA SHA256:(removed)
==> /var/log/gitlab/gitlab-rails/production.log <==
Started POST "/api/v4/internal/allowed" for 127.0.0.1 at 2017-08-01 17:05:17 +0000
==> /var/log/gitlab/gitlab-shell/gitlab-shell.log <==
I, [2017-08-01T17:05:17.088866 #2286] INFO -- : POST http://127.0.0.1:8080/api/v4/internal/allowed 0.01170
I, [2017-08-01T17:05:17.089030 #2286] INFO -- : gitlab-shell: executing git command <git-receive-pack /var/opt/gitlab/git-data/repositories/products/preside-ext-ems.git> for user with key key-2.
==> /var/log/gitlab/sshd/current <==
2017-08-01_17:05:17.20480 Received disconnect from x.x.x.x port 57680:11: disconnected by user
2017-08-01_17:05:17.20483 Disconnected from x.x.x.x port 57680
==> /var/log/gitlab/gitlab-rails/production.log <==
Started GET "/-/metrics" for 127.0.0.1 at 2017-08-01 17:05:18 +0000
Processing by MetricsController#index as HTML
Filter chain halted as :validate_prometheus_metrics rendered or redirected
Completed 404 Not Found in 1ms (Views: 0.4ms | ActiveRecord: 0.0ms)
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-08-01 17:05:18 +0000
==> /var/log/gitlab/gitlab-workhorse/current <==
2017-08-01_17:05:18.16504 gitlab.mycompany.com # - - [2017-08-01 17:05:18.158505651 +0000 UTC] "POST /api/v4/jobs/request HTTP/1.1" 204 0 "" "gitlab-ci-multi-runner 9.4.1 (9-4-stable; go1.8.3; linux/amd64)" 0.006484
==> /var/log/gitlab/nginx/gitlab_access.log <==
172.17.0.1 - - [01/Aug/2017:17:05:18 +0000] "POST /api/v4/jobs/request HTTP/1.1" 204 0 "-" "gitlab-ci-multi-runner 9.4.1 (9-4-stable; go1.8.3; linux/amd64)"
==> /var/log/gitlab/gitlab-rails/production.log <==
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-08-01 17:05:23 +0000
==> /var/log/gitlab/gitlab-workhorse/current <==
2017-08-01_17:05:23.16534 gitlab.mycompany.com # - - [2017-08-01 17:05:23.159064793 +0000 UTC] "POST /api/v4/jobs/request HTTP/1.1" 204 0 "" "gitlab-ci-multi-runner 9.4.1 (9-4-stable; go1.8.3; linux/amd64)" 0.006235
==> /var/log/gitlab/nginx/gitlab_access.log <==
172.17.0.1 - - [01/Aug/2017:17:05:23 +0000] "POST /api/v4/jobs/request HTTP/1.1" 204 0 "-" "gitlab-ci-multi-runner 9.4.1 (9-4-stable; go1.8.3; linux/amd64)"
Not exactly an answer, but I have wiped the server and rebuilt it from scratch, manually recreating each project and importing its repository.
A royal PITA, but everything is working as expected.
I can only guess that either something was set up on the host server that was causing issues (I did a clean install on the host to start again), or that there was something about simply copying all our configuration and data directories from the old server to the new one that caused issues (which seems unlikely).
Not much help I'm afraid :(
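(For anyone hitting something similar before resorting to a rebuild: GitLab ships a self-check worth running first, and since pushes are turned into pipelines and dashboard updates by background Sidekiq jobs, the Sidekiq log is a good place to look. This assumes an Omnibus install in a Docker container named gitlab, which is just an example name:)
docker exec -it gitlab gitlab-rake gitlab:check SANITIZE=true
docker exec -it gitlab gitlab-ctl tail sidekiq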

Convert hostnames to IP addresses in an Apache access log file

I want to convert hostnames back to IP addresses in an Apache access log file (i.e. the opposite of what logresolve does).
I have an access log file that has been converted with logresolve, but I want to revert it.
Each line starts, as an example:
hostname.com - - [01/Jan/2016:00:00:00 +0000] "GET /stuff HTTP/1.1" 200 1046 "http://mywebsite.com" "Mozilla/5.0 (Windows NT 6.1)"
How do I convert hostname.com to an IP address for every line?
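A best-effort sketch of one way to do it, assuming the hostname is the first whitespace-separated field and still resolves (bear in mind a forward lookup will not always return the exact IP that logresolve started from):
# lines whose hostname no longer resolves are left unchanged
while read -r host rest; do
    ip=$(getent ahostsv4 "$host" | awk 'NR==1 {print $1}')
    printf '%s %s\n' "${ip:-$host}" "$rest"
done < access.log > access_resolved.log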

Per request logging in Node.js

I am an experienced Java developer picking up Node.js and making the shift to the asynchronous model. Most things are going fine except for logging. Developing in Node.js with Express, I cannot find anything similar to log4j and NDCs in Java.
My goal is to have each log statement automatically prepend the following information:
[2013-11-07 11:17:04.615 serverScript INFO 7036 192.168.7.209]
This includes the timestamp, the name of the js file writing the statement (for modularized node apps), the log level, the process ID (when running clusters), and the client's IP address.
I can write these when a request initially comes into my request handler, but without propagating a bunch of parameters to every called function, the log statements inside the subroutines don't have this information. I know I can create an instance of my logger inside each js file that initializes its name, but I have yet to figure out a solution for the client's IP address. For longer-running requests, the address I set in my logger gets overwritten when the next request comes in, so the IPs that get logged are crossed.
I've looked at winston but have not been able to solve this issue even with it. Has anyone accomplished this? It is very useful for tracking field issues when you can filter by IP to view a single user's activity.
[edit: sample output from the parameter-passing solution, until I learn the syslog way]
[2013-11-07 14:29:28.641 server INFO 7527 192.168.7.209] Got request from 192.168.7.209 for /ionmed/executeQuery?
[2013-11-07 14:29:28.641 router INFO 7527 192.168.7.209] About to route a request for /ionmed/executeQuery, method=POST
[2013-11-07 14:29:28.642 router INFO 7527 192.168.7.209] getting POSTed data
[2013-11-07 14:29:28.642 router INFO 7527 192.168.7.209] POST params: {"sqlQuery":"select sleep(10)","sessionStart":"1383852558799","rand":"0.5510970998368581","jsessionid":"117DBAA89F599D923AF80D4AB171BDDF"}
[2013-11-07 14:29:28.642 requestHandlers INFO 7527 192.168.7.209] 'query' was called.
[2013-11-07 14:29:28.642 requestHandlers INFO 7527 192.168.7.209] select sleep(10)
[2013-11-07 14:29:30.673 server INFO 7527 192.168.7.217] Got request from 192.168.7.217 for /
[2013-11-07 14:29:30.673 router INFO 7527 192.168.7.217] About to route a request for /, method=GET
[2013-11-07 14:29:30.673 router INFO 7527 192.168.7.217] No request handler found for /; serving as file
[2013-11-07 14:29:30.673 router INFO 7527 192.168.7.217] Request handler 'serveFile' was called to get: /index.html
[192.168.7.217 Thu, 07 Nov 2013 19:29:30 GMT] HTTP/1.1 GET "/node/" 200 "Mozilla/5.0 (iPod; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
[2013-11-07 14:29:33.578 server INFO 7527 192.168.7.217] Got request from 192.168.7.217 for /
[2013-11-07 14:29:33.578 router INFO 7527 192.168.7.217] About to route a request for /, method=GET
[2013-11-07 14:29:33.578 router INFO 7527 192.168.7.217] No request handler found for /; serving as file
[2013-11-07 14:29:33.579 router INFO 7527 192.168.7.217] Request handler 'serveFile' was called to get: /index.html
[192.168.7.217 Thu, 07 Nov 2013 19:29:33 GMT] HTTP/1.1 GET "/node/" 200 "Mozilla/5.0 (iPod; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
[2013-11-07 14:29:38.644 requestHandlers INFO 7527 192.168.7.209] sending response
[192.168.7.209 Thu, 07 Nov 2013 19:29:38 GMT] HTTP/1.1 POST "/node/ionmed/executeQuery?" 200 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:25.0) Gecko/20100101 Firefox/25.0"
[2013-11-07 14:29:41.540 server INFO 7527 192.168.7.217] Got request from 192.168.7.217 for /
[2013-11-07 14:29:41.541 router INFO 7527 192.168.7.217] About to route a request for /, method=GET
[2013-11-07 14:29:41.541 router INFO 7527 192.168.7.217] No request handler found for /; serving as file
[2013-11-07 14:29:41.541 router INFO 7527 192.168.7.217] Request handler 'serveFile' was called to get: /index.html
[192.168.7.217 Thu, 07 Nov 2013 19:29:41 GMT] HTTP/1.1 GET "/node/" 200 "Mozilla/5.0 (iPod; CPU iPhone OS 5_1_1 like Mac OS X) AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 Mobile/9B206 Safari/7534.48.3"
[2013-11-07 14:29:45.146 server INFO 7527 192.168.7.209] RLz6tmJ7KTH2R16VCVTX: bye {"user":"1"}
[2013-11-07 14:29:45.176 server INFO 7527 192.168.7.209] RLz6tmJ7KTH2R16VCVTX: disconnected
Now I just need to figure out how to get the Express request logger to use the same line-entry format as my internal logger, until it is all moved to rsyslog.
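(A possible sketch for that last piece, assuming the bundled Connect logger is swapped for the standalone morgan package that later Express versions use; :date[iso], :remote-addr and the custom-token API are standard morgan features, while the format string itself is just an approximation of the layout above:)
var morgan = require('morgan');
// custom token for the process id
morgan.token('pid', function () { return process.pid; });
app.use(morgan('[:date[iso] express INFO :pid :remote-addr] :method :url :status'));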
I ran into this same problem some time ago, and finally I could spend some time researching it. The #ibash approach and his post put me on the path to solving the problem I had (thanks for your help). I only went a few steps further in order to automatically print a unique id per request in the logs.
In your case you can use the same approach to add the origin and destination IP, and any other information you need, to each request and print it automatically in all logs.
My approach:
- As #ibash explained, I used continuation-local-storage to share information among all the modules per request. I generate a unique id per request and store it in a namespace created with this library.
- I wrapped the Winston library (in a very simple way) in order to recover the information from the shared namespace and override all the Winston methods I use, adding the unique id to the string. Obviously, in your case you would add all the info you need, stored beforehand in the library's namespace.
As the problem is a little complex to explain to people not familiar with all these things, I wrote it up in a post with a clear example that you can reuse if you want. The Winston wrapper could be really useful:
Express.js: Logging info with global unique request ID – Node.js
I hope you can reuse my code, and perhaps in the future Express will implement a solution for this.
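A minimal sketch of that idea, assuming express, winston and continuation-local-storage are installed (field and namespace names are illustrative):
var express = require('express');
var winston = require('winston');
var cls = require('continuation-local-storage');

var ns = cls.createNamespace('request-context');
var app = express();

// Store the per-request data in the namespace before anything else runs.
app.use(function (req, res, next) {
    ns.bindEmitter(req);
    ns.bindEmitter(res);
    ns.run(function () {
        ns.set('ip', req.connection.remoteAddress);
        ns.set('reqId', Date.now() + '-' + Math.random().toString(36).slice(2));
        next();
    });
});

// Thin wrapper that prepends the per-request context to every message.
function log(level, msg) {
    var prefix = '[' + new Date().toISOString() + ' ' + process.pid + ' ' +
                 (ns.get('ip') || '-') + ' ' + (ns.get('reqId') || '-') + '] ';
    winston.log(level, prefix + msg);
}

app.get('/', function (req, res) {
    log('info', 'handling /');   // the context is available here without passing it around
    res.send('ok');
});

app.listen(3000);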
These instructions are from an Ubuntu 12.04 distribution I set up, but they should apply pretty closely to RHEL, Fedora, CentOS, etc.
Rsyslog is a system logging utility you can use to log messages from any program on a Linux machine. First you need to find your rsyslog configuration file. You can do that with the following command:
sudo find / -name rsyslog.conf
If you can't find the configuration file, you can list the running services to see whether rsyslog is even on your machine, with the following command:
service --status-all
Now open the file it finds and do the following:
Comment out the line $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
Uncomment $ModLoad imtcp
Uncomment $InputTCPServerRun and set the port number to 1514. I use 1514 because rsyslog on Ubuntu 12.04 has a problem dropping permissions when it binds to port 514; other distributions don't have this issue, so you could keep the default port there. I get around it by using iptables to redirect port 514 traffic to 1514.
Change $FileCreateMode 0640 to 0644
Now create a file named /etc/rsyslog.d/10.conf (a secondary configuration file for rsyslog where we can filter messages, name log files, etc.) and add the following to it:
$template DailyPerHostLogs,"/var/log/MyLogFile_%$YEAR%_%$MONTH%_%$DAY%.log"
#:msg,contains,"MsgName" -?DailyPerHostLogs
*.* -?DailyPerHostLogs
&~
This creates a new log file each day. The commented-out :msg line shows how to match only messages containing MsgName and send them to the daily file; the *.* line currently sends everything there. The trailing &~ then discards each message so it isn't logged again by any later rules.
Now restart rsyslog (or reboot the machine you are working on) and it should all be working. You can check this by looking for files in /var/log as defined in 10.conf above. Hit the logger from the command line by issuing the following commands:
logger this is from the command line
echo "this is from the tcp port" > /dev/tcp/127.0.0.1/1514
You should see both of those lines show up in the log file. If you do, let's move on to the Node module that can write to the log.
var net = require('net');
// Connect to the local rsyslog TCP input configured on port 1514 above.
var client = net.connect({port: 1514}, function () { console.log('Open'); });
// The leading space is needed for rsyslog's message parsing (see the link below);
// the trailing '\n' tells rsyslog the line is complete.
client.write(' MsgName: What hath God wrought?' + '\n');
//Do everything else your program needs. . .
The '\n' at the end of the write tells rsyslog we are done with that line. Also, you will need to prepend a space for the filtering to work: http://www.rsyslog.com/log-normalization-and-the-leading-space/
The devil is always in the details with a setup like this, but I think this will get you most of the way there and google searching will get you the rest of the way.
Answering this as I just wrote a post on how to use continuation-local-storage to save a "transaction id" with every log (without manually propagating it). You can do the same for the client ip, process id, etc.
Follow this post: https://datahero.com/blog/2014/05/22/node-js-preserving-data-across-async-callbacks/
But instead of just saving a transaction id, you'll want these as well:
request.connection.remoteAddress and process.pid
Let me know if you have any questions here or there, and I'll answer them.
