google_accounts_daemon[1140]: Full path required for exclude: net[4026532634] - linux

I issued the command journalctl -xe on my CentOS 7 VM in Google Cloud and I got this error:
google_accounts_daemon[1140]: Full path required for exclude: net[4026532634]
Does anyone have an idea on this?

Response to the bug report I filed on Google Cloud Platform:
The accounts daemon is activated every ~90 seconds to ensure that expired SSH keys are removed from the guest. At that time, the guest will attempt to create/update/remove users based on the non-expired keys in metadata. If some user isn't getting set up properly (maybe there is a problem adding the user to one of the listed groups?) it might result in something getting called every couple minutes.
Google Cloud Platform Issue #444
I did not take more time to debug the issue, since the only problem is the volume of log messages.
The log messages stopped when I upgraded the instances to CentOS 7.4.

Yes, I am running Docker inside my VM. ^_^
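The net[4026532634] in the message looks like a network namespace identifier, and Docker creates one namespace per container, which fits the note above. A quick way to see which processes own network namespaces (a sketch; works on any Linux with /proc):
# each process's net symlink shows its namespace inode; container processes differ from the host
sudo ls -l /proc/[0-9]*/ns/net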

Related

azcopy from Google Cloud to Azure just hangs

I am trying to use azcopy to copy from Google Cloud to Azure.
I'm following the instructions here, and from the generated logs the connectivity to GCP seems fine, the SAS token is fine, and the container is created fine (I can see it appear in Azure Storage Explorer), but then it just hangs. Output is:
INFO: Scanning...
INFO: Authenticating to source using GoogleAppCredentials
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
If I look at the log it shows:
2022/06/01 07:43:25 AzcopyVersion 10.15.0
2022/06/01 07:43:25 OS-Environment windows
2022/06/01 07:43:25 OS-Architecture amd64
2022/06/01 07:43:25 Log times are in UTC. Local time is 1 Jun 2022 08:43:25
2022/06/01 07:43:25 ISO 8601 START TIME: to copy files that changed before or after this job started, use the parameter --include-before=2022-06-01T07:43:20Z or --include-after=2022-06-01T07:43:20Z
2022/06/01 07:43:25 Authenticating to source using GoogleAppCredentials
2022/06/01 07:43:26 Any empty folders will not be processed, because source and/or destination doesn't have full folder support
As I say, no errors around SAS token being out of date, or can't find the GCP credentials, or anything like that.
It just hangs.
It does this whether I try to copy a single named file or do a recursive directory copy. Anything.
Any ideas, please?
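For context, the transfer is invoked along the usual GCS-to-Azure lines from the AzCopy documentation (a sketch, not my exact command; the key-file path, bucket, storage account, and container names below are placeholders):
rem point AzCopy at the GCP service-account key, then copy bucket -> container
set GOOGLE_APPLICATION_CREDENTIALS=C:\keys\gcp-service-account.json
azcopy copy "https://storage.cloud.google.com/my-bucket" "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" --recursive=true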
• I would suggest checking the logs of these AzCopy transactions for more details. On Windows, the logs are stored in the ‘%USERPROFILE%\.azcopy’ directory. AzCopy creates log and plan files for every job, so any potential problems can be investigated by analyzing them.
• Since the AzCopy job hangs during the transfer, it might be a network fluctuation, a timeout, or a server-busy issue. Remember that AzCopy retries up to 20 times in these cases, and the retry usually succeeds. Look for errors in the logs near ‘UPLOADFAILED’, ‘COPYFAILED’, or ‘DOWNLOADFAILED’.
• The following command will get all the errors with ‘UPLOADFAILED’ status from the relevant log file (a sketch that scans every log at once follows after the documentation links below):
Select-String UPLOADFAILED .\<CONCERNEDLOGFILE GUID>.log
To show a job's status by job ID, run:
azcopy jobs show <job-id> --with-status=Failed
• Run the AzCopy command from your local system with the ‘--no-check-certificate’ argument, which skips certificate checks for the system certificates at the receiving end. Also make sure the root certificates for your network client device or software are correctly installed locally, as they can block jobs transferring files from on-premises to Azure.
Also, once the job has started and then hangs, press CTRL+C to kill the process and immediately check the AzCopy logs as well as the Event Viewer for any system issues. That usually shows exactly why the process failed or hung.
For more information, refer to the documentation links below:
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure
https://github.com/Azure/azure-storage-azcopy/issues/517
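To scan every AzCopy log at once rather than one file at a time, something like this should work (a sketch; adjust the directory if AZCOPY_LOG_LOCATION has been changed):
Select-String -Path "$env:USERPROFILE\.azcopy\*.log" -Pattern "UPLOADFAILED|COPYFAILED|DOWNLOADFAILED"
azcopy jobs list
The azcopy jobs list output gives the job IDs to feed into the jobs show command above.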
Frustratingly, after many calls with Microsoft support, while demoing this to another person the exact same command with the exact same SAS token etc. that was previously failing just started to work.
I hate problems that 'fix themselves' as it means it will likely occur again.
Thanks to KartikBhiwapurkar-MT for a detailed response too.

fusioninventory, ITSM

To begin with, I have an internship with a trading company that has 15 points of sale. My mission is to manage its IT infrastructure with ITSM 9.1.6.
To discover its network, I use Fusioninventory 9.1+1.0.
I have also installed the latest FusionInventory agent for Windows. My problem is that I only get responses from 10 of the points of sale. I haven't been able to pin down the problem, because the agent itself seems to work fine (on all 15 points of sale).
PS: it's not a firewall problem; I installed netdiscovery, deploy, ESX and the other FusionInventory features.
I don't have much knowledge of ITSM or FusionInventory, but I have followed tutorials.
Sorry if these are dumb questions, but could anyone help me, please?
You should check the logs to start debugging.
Enable logging by setting debug to 1 or 2. Open regedit, head to HKLM\Software\Fusioninventory\, find the debug key, and edit it (a command-line equivalent is sketched after this answer).
If you are running the agent as a service then restart it, otherwise just execute the fusioninventory-agent.bat script to launch a new inventory.
Check the logs. They are usually at %programfiles%\fusioninventory-agent\, although I think they are more likely under %programfiles%\fusioninventory-agent\var\. You can change where the log is stored with the logfile key.
There you'll find what is stopping your inventory from reaching the server.
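If you prefer the command line to regedit, something like this should do the same thing (a sketch; the registry key follows the answer above and the service name FusionInventory-Agent is an assumption, so verify both in regedit and services.msc first):
rem raise the agent's debug level (check the exact key name and value type in regedit)
reg add "HKLM\Software\FusionInventory" /v debug /t REG_SZ /d 2 /f
rem restart the agent service so the new setting takes effect
net stop "FusionInventory-Agent" && net start "FusionInventory-Agent"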

SSH fingerprint change after ubuntu update (Azure only)

After I upgraded my Ubuntu 14.04 LTS machine hosted on Azure (the previous update was two weeks ago, on Feb. 22nd), it now warns me about a changed server SSH key when I try to connect to it.
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
I am ruling out the Ubuntu update triggering this change because this happened with my (only) Azure machine, but not the rest of about a dozen Linux servers that run either locally or on AWS with nearly identical configuration that were updated at the same time. I have also checked the host key algorithm as reported by ssh -v and it is unchanged (ECDSA-SHA2-NISTP256).
Is there anything specific about the way Azure handles SSH connections, or something particular about the Ubuntu image provided by Azure that could have led to the change in the server key?
P.S. I am downloading the VHD to check the machine locally, but this will take at least 24 hours with my connection. I was just wondering, maybe somebody has run into the same issue before.
It turns out that the keys were regenerated by cloud-init. As far as I can tell it was due to this bug: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1551419
I would like to be able to provide a less painful solution than downloading the VHD and checking the server fingerprint, but unfortunately the Azure portal still displays the fingerprint for the original key that was created when the instance was first provisioned.
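For anyone hitting the same thing, one way to confirm that the new key is legitimate without downloading the VHD is to read the fingerprint over an access path you already trust (for example the Azure console / boot diagnostics), then clear the stale known_hosts entry (a sketch; the hostname is a placeholder):
# on the server, via a session you already trust:
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub
# on the client, once the fingerprint matches, drop the old known_hosts entry:
ssh-keygen -R myvm.cloudapp.net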

Role Instances are taking longer than expected - Workaround issues

Whenever we get the error "Role Instances are taking longer than expected", the only possible options are:
Shut down the emulators and try again.
Restart the machine and see if that helps.
Uninstall the Azure Tools for that version.
Sometimes uninstalling it takes a long time, sometimes even days. It appears that some process or service is blocking it. Has anyone faced this before? If yes, does anyone know which process might be blocking it?
When an instance starts it will run the OnStart method on the worker/web role (depending on your service type). The more stuff you have in there, the more time the role takes to start up. Common culprits are the cache (as mentioned) and blob/table storage (if you read/write/create when the role starts).
Try minimizing OnStart's workload and moving any storage work into async tasks.
I have had similar problems in the past:
IISConfigurator could not map the web roles in IIS. In my case it was due to corrupted file system ACLs on the code directory. See logs under C:\Users\YOUR_USER_NAME\AppData\Local\dftmp\IISConfiguratorLogs\
Another cause might be that something else has tied up the port numbers that Azure is trying to bind your web role on, or that the ports the local storage emulator needs for blobs, queues, and tables (10000-10002) have been taken by another app. Open a command prompt and run netstat -anb (or see the narrower check below).
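A narrower check for just the emulator ports looks like this (a sketch; netstat -ano avoids the elevation that -b needs, and findstr treats the space-separated strings as alternatives):
netstat -ano | findstr ":10000 :10001 :10002"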
Try running Visual Studio using the "Run as Administrator" option.

linux gedit: I always get "GConf Error: failed to contact configuration server ..."

How come I always get
"GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)"
when I start 'gedit' from a shell from my superuser account?
I've been using GUI apps as a logged-in user and as a secondary user for 15+ years on various UNIX machines. There are plenty of good reasons to do so (remote shell, testing of configuration files, running multiple sessions of programs that only allow one instance per user, etc.).
There's a bug at launchpad that explains how to eliminate this message by setting the following environment variable.
export DBUS_SESSION_BUS_ADDRESS=""
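If you want that setting to survive new shells for the account in question, appending it to that account's shell profile is one option (a sketch; assumes bash):
echo 'export DBUS_SESSION_BUS_ADDRESS=""' >> ~/.bashrc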
The technical answer is that gedit is a Gtk+/Gnome program and expects to find a gconf session for its configuration. But when you run it as a separate user who isn't logged in on the desktop, there is no such session to find, so it spits out a warning telling you. The failure should be benign, though, and the editor will still run.
The real answer is: don't do that. You don't want to be running GUI apps as anything but the logged-in user, in general. And you never want to be running any GUI app as root, ever.
For some distributions (RHEL, CentOS) you may need to install the dbus-x11 package:
sudo yum install dbus-x11
Additional details here.
Setting and exporting DBUS_SESSION_BUS_ADDRESS to "" fixed the problem for me. I only had to do this once and the problem was permanently solved. However, if you have a problem with your umask setting, as I did, then the GUI applications you are trying to run may not be able to properly create the directories and files they need to function correctly.
I suggest creating (or, have created) a new user account solely for test purposes. Then you can see if you still have the problem when logged in to the new user account.
I ran into this issue myself on several different servers. I tried all of the suggestions listed here: made sure ~/.dbus had proper ownership, service messagebus restart, etc.
It turns out that my ~/.dbus was mode 755, and the problem went away when I changed the mode to 700. I found this by comparing known working servers with servers showing this error.
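For reference, the fix described above amounts to the following (a sketch; run it as the affected account, or substitute that account's home directory and name if fixing it as root):
chown -R "$USER": ~/.dbus   # ensure the directory is owned by the account that uses it
chmod 700 ~/.dbus           # restrict it to the owner, as on the known working servers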
I understand there are several different answers to this problem, as I have been trying to solve this for 3 days.
The one that worked for me was to run
rm -r .gconf
rm -r .gconfd
in my home directory. Hope this helps somebody.
