Trouble running Airflow on reboot - Linux

We're running an Ubuntu 15.10 virtual machine. I edited the user crontab to have the following lines:
@reboot /usr/local/bin/airflow schedule -D
@reboot /usr/local/bin/airflow webserver -D
In the syslog I get the following lines:
Feb 16 10:48:58 SERVERNAME cron[723]: (CRON) INFO (Running @reboot jobs)
Feb 16 10:48:58 SERVERNAME CRON[748]: (username) CMD (airflow schedule -D)
Feb 16 10:48:58 SERVERNAME CRON[749]: (username) CMD (airflow webserver -D)
If I run those lines while logged in they work, but not on restart. I'm not all that skilled at Linux, so I'm assuming there's something easy I'm missing here.
I get this output sent to my "Mail" on restart.
X-Original-To: analytics
Delivered-To: analytics@PARKAT1TEST
Received: by PARKAT1TEST (Postfix, from userid 1005)
id 78011101C51; Thu, 16 Feb 2017 12:15:47 -0600 (CST)
From: root@PARKAT1TEST (Cron Daemon)
To: analytics@PARKAT1TEST
Subject: Cron <analytics@PARKAT1TEST> /usr/local/bin/airflow webserver -D
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/analytics>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=analytics>
Message-Id: <20170216181547.78011101C51@PARKAT1TEST>
Date: Thu, 16 Feb 2017 12:15:47 -0600 (CST)
[2017-02-16 12:15:45,610] {__init__.py:36} INFO - Using executor SequentialExecutor
[2017-02-16 12:15:45,879] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python3.4/lib2to3/Grammar.txt
[2017-02-16 12:15:45,908] {driver.py:120} INFO - Generating grammar tables from /usr/lib/python3.4/lib2to3/PatternGrammar$
[Airflow ASCII-art banner]
[2017-02-16 12:15:47,033] {models.py:154} INFO - Filling up the DagBag from /home/analytics/airflow/dags
Running the Gunicorn server with 4 syncworkers on host 0.0.0.0 and port 8080 with a timeout of 120...
Traceback (most recent call last):
  File "/usr/local/bin/airflow", line 15, in <module>
    args.func(args)
  File "/usr/local/lib/python3.4/dist-packages/airflow/bin/cli.py", line 423, in webserver
    'gunicorn', run_args
  File "/usr/lib/python3.4/os.py", line 523, in execvp
    _execvpe(file, args)
  File "/usr/lib/python3.4/os.py", line 568, in _execvpe
    raise last_exc.with_traceback(tb)
  File "/usr/lib/python3.4/os.py", line 558, in _execvpe
    exec_func(fullname, *argrest)
FileNotFoundError: [Errno 2] No such file or directory
All the files referenced in the error message exist, and the command still runs fine when I log in and run it manually.
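One thing worth checking, given the X-Cron-Env headers above: cron runs the job with PATH=/usr/bin:/bin only, and the airflow webserver command in turn exec's gunicorn (that is what the execvp call in the traceback is doing), which is usually installed in /usr/local/bin alongside airflow, so the FileNotFoundError may simply mean gunicorn cannot be found on cron's PATH. A minimal sketch of a crontab that widens PATH (paths taken from the question, not verified):
# cron's default PATH (/usr/bin:/bin) does not include /usr/local/bin, where
# gunicorn -- which "airflow webserver" exec's -- normally lives
PATH=/usr/local/bin:/usr/bin:/bin
@reboot /usr/local/bin/airflow webserver -D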

Related

Invalid numeric literal when running jq from script via crontab

I have a shell script that runs fine from the command line but throws an error when it's run from a cron job. What could be causing this error?
The following shows the crontab entry, the script, and the error I'm getting in /var/spool/mail.
[jira-svc ~]$ cat jira_trigger_updater.sh
#!/usr/bin/sh
tmp_file=/tmp/merge-issues/$(date --iso-8601=minutes).txt
mkdir -p /tmp/merge-issues
/usr/bin/curl -s -X GET -H "Content-Type: application/json" "https://services-gateway.g054.usdcag.aws.ray.com/project-management/rest/api/2/search?jql=filter%3D14219&fields=key,status,fixVersions" -u jira-svc:${UPDATE_TRIGGER_PASSWORD} > ${tmp_file}
/usr/bin/jq -r '.issues[] | [.key , .fields.status.name , .fields.fixVersions[].name] | join(",")' ${tmp_file} > /rational/triggers/inputs/jira_merge.csv
/usr/bin/chmod 644 /rational/triggers/inputs/jira_merge.csv
#rm -rf /tmp/merge-issues
[jira-svc ~]$ crontab -l
#*/1 * * * * /usr/bin/sh /home/jira-svc/jira_trigger_updater.sh
[jira-svc ~]$ tail -25 /var/spool/mail/jira-svc
From jira-svc@cc01-217-136.localdomain Tue Feb 8 20:10:02 2022
Return-Path: <jira-svc@cc01-217-136.localdomain>
X-Original-To: jira-svc
Delivered-To: jira-svc@cc01-217-136.localdomain
Received: by cc01-217-136.localdomain (Postfix, from userid 1001)
id 9C40168152B5; Tue, 8 Feb 2022 20:10:02 +0000 (UTC)
From: "(Cron Daemon)" <jira-svc@cc01-217-136.localdomain>
To: jira-svc@cc01-217-136.localdomain
Subject: Cron <jira-svc@cc01-217-136> /usr/bin/sh /home/jira-svc/jira_trigger_updater.sh
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: <XDG_SESSION_ID=2172>
X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/1001>
X-Cron-Env: <LANG=en_US.UTF-8>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/jira-svc>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=jira-svc>
X-Cron-Env: <USER=jira-svc>
Message-Id: <20220208201002.9C40168152B5@cc01-217-136.localdomain>
Date: Tue, 8 Feb 2022 20:10:02 +0000 (UTC)
parse error: Invalid numeric literal at line 13, column 0
[jira-svc ~]$
Cron runs jobs from a non-interactive, non-login shell and does not load environment variables from files like ~/.bashrc, ~/.bash_profile, /etc/profile, and so on. You must source those files yourself if you want the environment variables defined in them to be available to the job.
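For example, assuming UPDATE_TRIGGER_PASSWORD is exported from ~/.bash_profile (the question does not say where it is defined), a crontab entry along these lines would make it visible to the script:
# source the profile first so UPDATE_TRIGGER_PASSWORD (and anything else the
# script expects from a login shell) is set before the script runs
*/1 * * * * . $HOME/.bash_profile; /usr/bin/sh /home/jira-svc/jira_trigger_updater.sh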

scapy.error.Scapy_Exception: Can't attach the BPF filter

I found a WiFi scanner written in Python on YouTube.
https://www.youtube.com/watch?v=DFTwB2nAexs
Direct GitHub script link: https://github.com/davidbombal/red-python-scripts/blob/main/lanscan_arp.py
But I'm running into a BPF filter error: "scapy.error.Scapy_Exception: Can't attach the BPF filter !"
Script:
#!/usr/bin/env python3
# Import scapy
import scapy.all as scapy
# We need to create regular expressions to ensure that the input is correctly formatted.
import re

# Basic user interface header
print(
    r"""______ _ _ ______ _ _
| _ \ (_) | | | ___ \ | | | |
| | | |__ ___ ___ __| | | |_/ / ___ _ __ ___ | |__ __ _| |
| | | / _` \ \ / / |/ _` | | ___ \/ _ \| '_ ` _ \| '_ \ / _` | |
| |/ / (_| |\ V /| | (_| | | |_/ / (_) | | | | | | |_) | (_| | |
|___/ \__,_| \_/ |_|\__,_| \____/ \___/|_| |_| |_|_.__/ \__,_|_|"""
)
print("\n****************************************************************")
print("\n* Copyright of David Bombal, 2021 *")
print("\n* https://www.davidbombal.com *")
print("\n* https://www.youtube.com/davidbombal *")
print("\n****************************************************************")

# Regular Expression Pattern to recognise IPv4 addresses.
ip_add_range_pattern = re.compile("^(?:[0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]*$")

# Get the address range to ARP
while True:
    ip_add_range_entered = input(
        "\nPlease enter the ip address and range that you want to send the ARP request to (ex 192.168.1.0/24): "
    )
    if ip_add_range_pattern.search(ip_add_range_entered):
        print(f"{ip_add_range_entered} is a valid ip address range")
        break

# Try ARPing the ip address range supplied by the user.
# The arping() method in scapy creates a packet with an ARP message
# and sends it to the broadcast mac address ff:ff:ff:ff:ff:ff.
# If a valid ip address range was supplied the program will return
# the list of all results.
arp_result = scapy.arping(ip_add_range_entered)
Output:
Please enter the ip address and range that you want to send the ARP request to (ex 192.168.1.0/24): 192.168.1.0/24
192.168.1.0/24 is a valid ip address range
Traceback (most recent call last):
  File "/Users/belgra/Development/WiFi Scanner/lan_scan_arp.py", line 41, in <module>
    arp_result = scapy.arping(ip_add_range_entered)
  File "/opt/homebrew/lib/python3.10/site-packages/scapy/layers/l2.py", line 734, in arping
    ans, unans = srp(
  File "/opt/homebrew/lib/python3.10/site-packages/scapy/sendrecv.py", line 675, in srp
    s = iface.l2socket()(promisc=promisc, iface=iface,
  File "/opt/homebrew/lib/python3.10/site-packages/scapy/arch/bpf/supersocket.py", line 254, in __init__
    super(L2bpfListenSocket, self).__init__(*args, **kwargs)
  File "/opt/homebrew/lib/python3.10/site-packages/scapy/arch/bpf/supersocket.py", line 119, in __init__
    attach_filter(self.ins, filter, self.iface)
  File "/opt/homebrew/lib/python3.10/site-packages/scapy/arch/bpf/core.py", line 155, in attach_filter
    raise Scapy_Exception("Can't attach the BPF filter !")
scapy.error.Scapy_Exception: Can't attach the BPF filter !
/Users/belgra/Development/WiFi Scanner ❯
I installed Scapy 2.4.5 and am running this code with Python 3.10.1 on an M1 Mac.
Any ideas?
I was able to solve this by installing the optional libpcap library that Scapy mentions in its installation documentation.
Run brew update in your terminal
Run brew install libpcap in your terminal
Run Scapy with scapy in your terminal
Within Scapy run conf.use_pcap = True
Here's the link to the documentation with more info
For reference I am running an M1 MacBook Air (macOS Monterey v12.1) with python 3.8.12.
I got the same error on an M1 MacBook and I solved it by installing and configuring libpcap as @RBPEDIIIAL suggested.
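If it helps, a quick way to check which capture backend Scapy ends up with after installing libpcap (conf.use_pcap is the same setting the steps above refer to; the one-liner is only an illustration):
# print Scapy's current libpcap setting; if it shows False, put
# "conf.use_pcap = True" near the top of the script, before arping() is called
python3 -c 'from scapy.all import conf; print(conf.use_pcap)'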

Jmeter executing scripts but provides blank report

I am running JMeter in non-GUI mode on my Linux server inside Docker. When I check whether JMeter is installed, it reports the version, but when I execute my script I get the following:
root@xxxxxxx:/# /var/xxxxx/apache-jmeter-5.1/bin/jmeter -n -t /lib/xxx/deduction.jmx -l test.jtl
Creating summariser <summary>
Created the tree successfully using /lib/xxx.s/deduction.jmx
Starting the test @ Tue May 14 05:54:53 UTC 2019 (1557813293320)
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Tidying up ... @ Tue May 14 05:55:53 UTC 2019 (1557813353945)
... end of run
The same file works fine on my Windows machine.
root@xxxxxx:/# /var/xxxxxx/apache-jmeter-5.1/bin/jmeter -v
[Apache JMeter ASCII-art banner] 5.1 r1853635
Copyright (c) 1999-2019 The Apache Software Foundation
I experienced that too. In my case, the problem was the path set in the JMeter "CSV Data Set Config" component. When I executed the project on my machine, JMeter searched for and found the path, but after uploading to the server I forgot to change the CSV path to match the server environment. After changing it, the tests ran fine on the server. So check whether there is something like that in your JMeter project.
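One way to avoid hard-coding that path (a sketch, assuming you can edit the test plan): set the "Filename" field of CSV Data Set Config to ${__P(csv.path)} and pass the machine-specific location as a property on the command line; the CSV file name below is made up for illustration.
# -J defines a JMeter property; __P() reads it inside the test plan
/var/xxxxx/apache-jmeter-5.1/bin/jmeter -n -t /lib/xxx/deduction.jmx -l test.jtl -Jcsv.path=/lib/xxx/deduction_data.csv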

ArangoDB connection refused error #61

I'm very new to ArangoDB, but when I try to start the server with arangosh I get this:
[arangosh ASCII-art banner]
arangosh (ArangoDB 3.3.3 [darwin] 64bit, using jemalloc, VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.0.2n 7 Dec 2017)
Copyright (c) ArangoDB GmbH
Pretty printing values.
Could not connect to endpoint 'http+tcp://127.0.0.1:8529', database: '_system', username: 'root'
Error message: 'Could not connect to 'http+tcp://127.0.0.1:8529' 'connect() failed with #61 - Connection refused''
I'm using the newest version.
It looks like the arangod server is not running on the same machine (127.0.0.1) on port 8529.
Can you verify it is actually running, and its port number is actually 8529? This is the default port, but it can be adjusted in the server configuration file (arangod.conf).
This error occurred because the ArangoDB server is not running. On macOS you have to start the server by running this command:
/usr/local/sbin/arangod &
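To confirm the daemon is actually up and listening afterwards, something like the following works (the /_api/version endpoint is part of ArangoDB's HTTP API; 8529 is the default port):
# check that an arangod process exists and that port 8529 answers;
# the curl call may return a 401 if authentication is enabled, which
# still proves the server is listening
ps aux | grep [a]rangod
curl http://127.0.0.1:8529/_api/version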

Linux Shell Script to extract data of email sent and mail back to admin

Here's an extract of the log file:
Jan 18 02:30:11 qaapp2 sendmail[3126]: q0I7UBoS00312: to=, ctladdr= (10021/10000), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=120448, relay=buf-ex02.cymfony.com. [10.1.6.37], dsn=2.0.0, stat=Sent ( <201201180730.q0I7UBVW00312@qaapp2.cymfony.com> Queued mail for delivery)
Jan 18 02:31:11 qaapp2 sendmail[3510]: q0I7VBOx00350: to=, ctladdr= (10021/10000), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=120453, relay=buf-ex02.cymfony.com. [10.1.6.37], dsn=2.0.0, stat=Sent ( <201201180731.q0I7VBei00350@qaapp2.cymfony.com> Queued mail for delivery)
Jan 18 06:43:44 qaapp2 sendmail[442]: q0IBhisf00044: to=, ctladdr= (0/0), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=120450, relay=buf-ex02.cymfony.com. [10.1.6.37], dsn=2.0.0, stat=Sent ( <201201181143.q0IBhiSG00043@qaapp2.cymfony.com> Queued mail for delivery)
I want to know how many mails were sent to user xyz@gmail.com, date wise, from the log file located at /var/log/maillog.
Any help is appreciated.
Whenever something needs counting, wc is your friend:
grep 'to=<xyz@gmail.com>' /var/log/maillog | wc -l
You can check it via the command mentioned below:
grep -i "to=<xyz@gmail.com" /var/log/maillog | wc -l; grep -i "to=<xyz@gmail.com" /var/log/maillog | awk '{print $1,$2,$3,$7,$13}'
In the above command, the first part prints the number of mails that have been sent to xyz@gmail.com, and beneath it you get the 'month', 'date', 'time', and 'ID', followed by the delivery 'status'.
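Since the question asks for counts date wise, a small variation on the above groups the matches by month and day before counting (field positions assumed from the log lines shown):
# count matching lines per "Mon DD" prefix, e.g. "Jan 18"
grep 'to=<xyz@gmail.com>' /var/log/maillog | awk '{print $1, $2}' | sort | uniq -c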
