I'm trying to add Zabbix server support to a service that is written in Python. This service should send metrics to a Zabbix server in active mode, i.e. the service connects to the server periodically, not the other way around. (The service may be operated behind firewalls, so active mode is the only option.)
In the host.create API call, I'm required to give the interfaces for the host. Here is the documentation for it: https://www.zabbix.com/documentation/3.4/manual/api/reference/host/create - the interfaces parameter is required. If I try to pass an empty list:
from pyzabbix import ZabbixAPI

zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[],     # active agent, no interface???
)
Then I get this error:
pyzabbix.ZabbixAPIException: ('Error -32500: Application error., No permissions to referred object or it does not exist!', -32500)
I can create hosts with the same user through the Zabbix web interface, so I guess the problem is with the interfaces. So I tried to create an interface first. However, the hostinterface.create method requires a hostid parameter.
See here: https://www.zabbix.com/documentation/3.4/manual/api/reference/hostinterface/create - I must give a hostid.
This is a catch-22: in order to create a host, I need a host interface, but to create a host interface, I need a host.
What am I missing? Maybe I was wrong and the host.create call was rejected for a different reason. How can I figure out what it was?
The host.create API will create the host interface as well; you need to populate interfaces with the correct fields according to the documentation.
For instance, add this before calling the API:
interfaces = []
interfaces.append({
    'type': 2,      # 2 = SNMP interface (1 = Zabbix agent, 3 = IPMI, 4 = JMX)
    'main': 1,      # default interface of this type
    'useip': 1,     # connect by IP rather than DNS
    'ip': '1.2.3.4',
    'dns': '',
    'port': '161'
})
Then pass it to the host.create API.
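For instance, a rough sketch of that call (reusing zapi, cfg, host_group_id and the interfaces list built above; note that the host group is passed here as a group object with a groupid, which is the form shown in the host.create documentation):

host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[{"groupid": host_group_id}],  # host group object, per the API docs
    interfaces=interfaces,
)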
The referenced documentation does not show it explicitly, but in Zabbix a host needs to have:
- One or more interfaces (active hosts need one too)
- One or more host groups
So for your code to work, you will need to change it to something like this:
zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[{
        "type": "1",      # Zabbix agent interface
        "main": "1",
        "useip": "1",
        "ip": "127.0.0.1",
        "dns": "mydns",   # can be blank
        "port": "10050",  # default Zabbix agent port
    }],
)
In your case it is an "active host", but in Zabbix the active/passive concept applies to items, not to hosts. So it is possible (and not unusual) to have hosts with both passive and active items at the same time.
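To illustrate that last point, here is a rough sketch (not part of the original answer, item name and key are hypothetical) of adding an active-check item to the host created above; item type 7 is "Zabbix agent (active)", so no interface is used for data collection:

item = zapi.item.create(
    hostid=host["hostids"][0],   # id returned by host.create above
    name="Service metric",       # hypothetical item name
    key_="service.metric",       # hypothetical item key reported by the active agent
    type=7,                      # 7 = Zabbix agent (active)
    value_type=3,                # 3 = numeric unsigned
    delay="60s",
)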
I have 3 servers (one with Windows Server 2012 R2 and 2 with Windows Server 2019) and I use Azure File Sync to sync files between them.
For a few days I have had a problem: the 2012 R2 server appears offline in the Azure portal (it shows "no activity"). I tried the Test-StorageSyncNetworkConnectivity cmdlet and it fails with the following message:
Discovery service connectivity result:
Result: Success
HostUri: unknown
HostIPv4Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostIPv6Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
Management service connectivity result:
Result: Fail. Failed to run test
HostUri: unknown
HostIPv4Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostIPv6Addr: Fail. DNS name does not exist. Resolution through GetAddrInfo failed with error: 11001
HostNetworkLatency [min,avg,max]: Network Latency Request Failed.
Monitoring service connectivity result:
Result: No response from monitoring agent process.
HostUri: unknown
HostIPsAddr: IPv4 and Ipv6 addresses do not exist
ServerEndpoint: faf66731-1e22-47eb-93eb-b8d3331f0de2
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
ServerEndpoint: 80f3bb96-463b-4f86-9e26-8dcf0c92f915
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
ServerEndpoint: b9a874b4-7acd-4174-b5e8-26ac23c84c7e
SyncServiceResult:
SyncServiceHostUri:
SyncServiceHostIPsAddr: IPv4 and Ipv6 addresses do not exist
SyncServiceHostNetworkLatency: Request Failed.
Remediation Steps
For Azure File Sync to work correctly, you will need to configure your servers to communicate with multiple Azure services
Refer the following public document for details on proxy settings or firewall settings for Azure File Sync - https://aka.ms/AFS/ProxyAndFirewall
If you have configured a private endpoint refer the following public document for configuring private endpoint for Azure File Sync - https://aka.ms/AFS/PrivateEndpoint
NetworkTestPassed Report
----------------- ------
False ...
The problem seems to be DNS related, but I tried the Test-NetConnection -ComputerName <remote-host> -Port 443 cmdlet with the correct URLs (taken from https://learn.microsoft.com/it-it/azure/storage/file-sync/file-sync-firewall-and-proxy#test-network-connectivity-to-service-endpoints) and all the endpoints seem to be working fine (the ping fails, but I think that is normal behavior). For example:
PS C:\Program Files\Azure\StorageSyncAgent> Test-NetConnection -ComputerName tm-kailani7.one.microsoft.com -Port 443
WARNING: Ping to tm-kailani7.one.microsoft.com failed -- Status: TimedOut
ComputerName : tm-kailani7.one.microsoft.com
RemoteAddress : 20.38.85.153
RemotePort : 443
InterfaceAlias : Ethernet 2
SourceAddress : 192.168.0.185
PingSucceeded : False
PingReplyDetails (RTT) : 0 ms
TcpTestSucceeded : True
I also tried FileSyncErrorsReport.ps1, but even that doesn't give me any errors:
WARNING: There are no file sync errors to report. Either the last completed sync session did not have per-item errors or
the ItemResults event log on the server wrapped due to too many per-item errors and the event log no longer contains
errors for this sync group. To learn more, see the Azure File Sync troubleshooting documentation:
https://aka.ms/AFS/FileSyncErrorReport
I think the problem lies with the fact that the AzureStorageSyncMonitor.exe process is not running; if I try to run it manually, it just closes itself after a few seconds.
I've got no event ID 9301 (specified here: https://learn.microsoft.com/it-it/azure/storage/file-sync/file-sync-troubleshoot?tabs=portal1%2Cazure-portal#server-endpoint-health), and by searching the other folders in Event Viewer I could only find event 4104, which shows an error dated to the last time the server reached the Azure endpoint:
Querying for new jobs failed.
HttpErrorCode: 0x80C8700C
InternalErrorCode: 0x80C80300
Any help would be greatly appreciated, thank you.
• Please check event ID 9302 in the 'FileSync' telemetry logs under 'Application and Service Logs' in Event Viewer. Active sync sessions are logged every 5 to 10 minutes; check whether they are making any progress, since the 'AzureStorageSyncMonitor.exe' utility synchronizes the status of the server endpoint to the Storage Sync service in the portal.
• You can also check 'Perfmon.msc', i.e., the performance counters built into Azure File Sync, to monitor sync activity locally on the server.
• Please also check the server's configured IP address settings, since you are encountering a DNS resolution issue while executing the 'Test-StorageSyncNetworkConnectivity' command. In the IP address settings, check whether the configured DNS server IP addresses (preferred and secondary) are correct and reachable.
Also check the 'hosts' file in 'C:\Windows\System32\drivers\etc': it should contain the correct IP address of the server (the Windows Server 2012 R2 machine) and its expected DNS hostname, as various services on the server itself, including 'AzureStorageSyncMonitor', refer to this file when sending DNS requests to the connected/configured external services and when communicating between internal services.
• Finally, I would suggest disabling negative caching on the DNS client, putting the suffix with the matching host A record as the last entry in the suffix search list, and using 'AF_UNSPEC' for the address family so the resolver determines the 'A/AAAA' results for you (see the sketch after the link below).
For more detailed information on this, kindly refer to the link below:
https://learn.microsoft.com/en-us/troubleshoot/windows-server/networking/getaddrinfo-fails-error-11001-call-af-inet6-family#workaround
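As a small illustration of that last suggestion (a sketch only, not part of the Azure tooling; the host name below is a placeholder), calling getaddrinfo with AF_UNSPEC returns both A and AAAA results and lets the caller pick whichever family works:

import socket

# Resolve a name with AF_UNSPEC so both IPv4 (A) and IPv6 (AAAA) records
# are returned; the calling code then chooses the family it can actually use.
# "example.afs.azure.net" is just a placeholder host name.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "example.afs.azure.net", 443, socket.AF_UNSPEC, socket.SOCK_STREAM):
    print(family, sockaddr)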
When I publish a service with a VIP, the advertised address does not route properly to the advertised port. For example, for a MariaDB Galera 3-node cluster service with a VIP specified as:
"labels": {
"VIP_0": "/mariadb-galera:3306"
}
On the configuration tab of the service page (and according to the docs), the load balanced address is:
mariadb-galera.marathon.l4lb.thisdcos.directory:3306
I can ping the DNS name just fine, but...
When I try to connect a front-end service (Drupal7, wordpress) to consume this load balanced address:port combination, there will be numerous connection failures and timeouts. It isn't that it never works but that it works quite sporadically, if at all. Drupal7 dies almost immediately and starts kicking up Bad Gateway errors.
What I have found through experimentation is that if I specify a hostPort for the service in question, the load balanced address will work as long as I use the hostPort value, and not the advertised load balanced service port as above. In this specific case I specified a hostPort of 3310.
"network":"USER",
"portMappings": [
{
"containerPort": 3306,
"hostPort": 3310,
"servicePort": 10000,
"name": "mariadb-galera",
"labels": {
"VIP_0": "/mariadb-galera:3306"
}
}
Then if I use the load balanced address (mariadb-galera.marathon.l4lb.thisdcos.directory) with the host port value (3310) in my Drupal7 settings.php, the front end connects and works fine.
I've noticed similar behaviour with custom applications connecting to MongoDB backends in the same DC/OS environment: the load balanced address/port combination specified never seems to work reliably, but if you substitute the hostPort value, it does.
The docs clearly state that:
address and port is load balanced as a pair rather than individually.
(from https://docs.mesosphere.com/1.9/networking/dns-overview/)
Yet I am unable to connect reliably when I specify the VIP-designated port, while it does work when I use the hostPort (and will not work at all unless I designate a specific hostPort in the service definition JSON). Whether this approach is actually load balanced remains a question to me, based on the wording in the documentation.
I must be doing something wrong, but I am at a loss... any help is appreciated.
My cluster nodes are VMWare virtual machines.
The VIP label shouldn't start with a slash:
"container": {
"portMappings": [
{
"containerPort": 3306,
"name": "mariadb-galera",
"labels": {
"VIP_0": "mariadb-galera:3306"
}
}
}
The service should then be available as <VIP label>.marathon.l4lb.thisdcos.directory:<VIP port>, in this case:
mariadb-galera.marathon.l4lb.thisdcos.directory:3306
You can test it using nc:
nc -z -w5 mariadb-galera.marathon.l4lb.thisdcos.directory 3306; echo $?
The command should return 0.
When you're not sure about exported DNS names you can list all of them from any DC/OS node:
curl -s http://198.51.100.1:63053/v1/records | grep mariadb-galera
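If nc is not available on the node, here is a minimal Python check along the same lines (using the host name and port from the example above):

import socket

# Try to open a TCP connection to the VIP; this only succeeds if the
# load-balanced address:port pair is actually routable from this node.
try:
    with socket.create_connection(
            ("mariadb-galera.marathon.l4lb.thisdcos.directory", 3306), timeout=5):
        print("connection succeeded")
except OSError as exc:
    print("connection failed:", exc)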
How can we write a Puppet manifest that identifies whether a service (httpd) is running or not on Puppet clients/agents, and if not, starts that service and sends out an email?
class apache {
  package { "mysql-server": ensure => installed }

  if hasstatus == "false" {
    service { "mysql":
      ensure  => running,
      require => Package["mysql-server"],
    }
  }
}

node default {
  include apache
}
I know this is not correct code, but I want to check hasstatus first, and if the service status is false, I want to start the service and send out an email.
I have configured tagmail.conf on the Puppet master and have also enabled Puppet reports, but I am not able to receive mails at my Gmail account. I can see Puppet agent reports on the Puppet master but am not receiving mails. Do I need to configure a mail server for this?
My tagmail.conf:
all: xxxxxxx@gmail.com
Puppet isn't an imperative shell script where you need to check the value of X before performing action Y that gets you to state Z. Instead, you specify that you want state Z and Puppet checks the current state and handles the transition.
What this means is that you don't need to check the status of a service before deciding whether to start it or not and instead you declare that the mysql service should be running and Puppet ensures this is the case.
Simply have this in your manifest alongside the package line:
service { "mysql":
ensure => running,
enable => true,
require => Package["mysql-server"],
}
The require line ensures the package is installed before evaluating or starting the service.
To send out notifications you can use the tagmail reporting feature in Puppet. First set up a tagmail file (reference docs) like this at /etc/puppet/tagmail.conf on the master:
mysql, apache: wwwadmins#example.com
And in the master's puppet.conf, set:
[master]
reports = tagmail
Ensure clients have report enabled in puppet.conf:
[agent]
report = true
This should then trigger e-mails relating to any resources with the "mysql" or "apache" tags (class names, module names etc).
I am attempting to obtain a data feed from Yahoo Finance. I am doing this with the following code:
System.Net.WebRequest request = System.Net.WebRequest.Create("http://download.finance.yahoo.com/download/quotes.csv?format=sl&ext=.csv&symbols=^ftse,^ftmc,^ftas,^ftt1x,^dJA");
request.UseDefaultCredentials = true;
// set properties of the request
using (System.Net.WebResponse response = request.GetResponse())
{
    using (System.IO.StreamReader reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
I have placed this code into a console application and, using Console.WriteLine on the output, I receive the information I require. I used the 'Run as...' command to execute it under a specific domain account.
When I use this code from within a Page load I receive the following error message "No connection could be made because the target machine actively refused it 76.13.114.90:80".
This seems to suggest that the call is reaching Yahoo (is this true?) and that there is something missing.
This would suggest there is an identity difference in the calls between the console application and application pool.
The environment is: Windows Server 2003, IIS 6.0, .NET 4.0.
"Target machine actively refused it" indicates that the TCP connection itself is not succeeding. This could be due to the fact that the Proxy settings when run under IIS are not the same as those that apply when you run in the console.
You can fix this by setting a WebProxy on your request that points to the proxy server used in the environment.
Yes, an active refusal is an indication that the target machine is receiving the request but the information provided is either incorrect or insufficient to process it. It is entirely possible that, since you had to run this call with 'Run as' in the console, the application pool's identity user does not have the appropriate permissions or username. You can try changing the application pool identity to that specific domain account to see if that alleviates the problem, but you may have to isolate this particular function into its own application pool to keep this configuration from affecting the rest of the website.
I'm running a vsFTPd FTP server with virtual users (i.e. users are stored in Berkeley DB and do not exist at OS level). The users are authenticated via /etc/pam.d/ftp:
#%PAM-1.0
auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
I want to implement user-level IP filtering via tcp_wrappers, for example:
/etc/hosts.deny:
vsftpd: toto#10.10.10.10
(user 'toto' is a virtual user).
However, toto can log in to the FTP server from 10.10.10.10:
Status: Connecting to 10.10.10.10:21...
Status: Connection established, waiting for welcome message...
Response: 220 "FTP server"
Command: USER toto
Response: 331 Please specify the password.
Command: PASS ********
Response: 230 Login successful.
Status: Connected
How can I make vsftpd's virtual users work with tcp_wrappers? And how can I debug the calls to tcp_wrappers to make sure vsftpd is passing the correct user name to it?
TCP wrappers may sound promising, but they won't work here (long explanation). However, you can achieve the same level of granularity via PAM.
For instance, if your vsFTPd was compiled with PAM support (ldd /usr/sbin/vsftpd | grep pam), you can locate vsftpd's PAM configuration file and replace the account line to use PAM access control instead.
# vi /etc/pam.d/vsftpd
# comment this line out:
#account    include    password-auth
# and add the following line:
account     required   pam_access.so
Then you can edit /etc/security/access.conf and create more complex rules tailored to your needs, e.g.:
+ : restricted_username : 192.168.1.10
+ : ALL EXCEPT restricted_username : ALL
- : ALL : ALL
The above rules will allow the user 'restricted_username' to log in only from that specific IP, while allowing the rest of the users to log in from all other sources.