Gnome online accounts - gnome-terminal

I successfully installed gnome-online-accounts on my PC, which runs Debian 9. Everything works fine when I work from an X terminal, logged in as the default user. The command:
gio list google-drive://XXXXXXXXXXX@gmail.com/
gives the expected results.
But it fails when the same command is run from crontab, even though it runs as the same default user. Here is the message:
gio: google-drive://XXXXXXXXXXX@gmail.com/: Operation not supported
If the problem were caused by an unmounted file system, due to a loss of connectivity, the message would be:
gio: google-drive://XXXXXXXXXXX@gmail.com/: The specified location is not mounted
It seems as though the command were issued by another user.
Does anyone have an idea where the trick is?

As hinted at the end of this page, the following line should be added inside the bash script executed by crontab, before the gio call:
declare -x DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/XXXX/bus
The XXXX value must be replaced by the UID of the user that owns the GOA connection; this value is often 1000.
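For context, here is a minimal sketch of what such a cron-driven script might look like, assuming UID 1000 and the same (anonymised) Google account as above; adjust both to your own setup:
#!/bin/bash
# Sketch only: point gio at the user's session bus before calling it from cron.
# The UID (1000) and the google-drive URI are placeholders.
declare -x DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
gio mount google-drive://XXXXXXXXXXX@gmail.com/ 2>/dev/null   # mount it first if it is not mounted yet
gio list google-drive://XXXXXXXXXXX@gmail.com/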

Related

How to differentiate a Linux command executed from the command line from one executed by a program

I have a Unix command 'abc' which gives me some output.
This abc lives on my server.
But when this command is run from the server's command line, I want to restrict its output from being seen.
By the above statement, I mean the following.
For example, if I say:
ls dirname
I can see the output of the above command on the console.
So, if the command is run from the command line, I don't want its output echoed on the console. I can't just send it to /dev/null, because I use the same command from my program, where I need the output assigned to a variable and then used further in my application.
However, I do want to get the output of this command when I call it from my program.
How can I differentiate between the two kinds of call?
The command whoami gives you the currently logged-in user, and the command last -i outputs information about the last users logged into the system, including the IP address (3rd column) and either a timestamp or a string stating that the user is still logged in.
With that in mind you could pipe these commands:
last -i | grep "$(whoami)" | grep 'still logged in'
which will provide an output like this:
(username) pts/2 0.0.0.0 Wed Dec 23 18:58 still logged in
(username) :0 0.0.0.0 Wed Dec 23 11:13 still logged in
So if you are running a shell on the same host, the IP will be 0.0.0.0, and something different otherwise. You can extract the IP string by piping awk onto the end of the command, as shown below.
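For instance, a minimal sketch that keeps only the IP column (column 3 of the last -i output):
last -i | grep "$(whoami)" | grep 'still logged in' | awk '{print $3}'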
However, adhering to the Unix philosophy of Do One Thing and Do It Well, I'd suggest a different approach: split your command into two different commands (a small sketch follows the list below):
A command to be used by the clients, where the output is whatever you want the clients to see
Another command (with two options, since there isn't much detail in the question):
Either extending the first command, adding the additional output, and using this one from your application
Or just generating the additional output, and using a combination of the two commands from your application
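A minimal sketch of that split, with purely hypothetical names: the application calls abc-full directly and gets everything, while people at the command line run the wrapper abc and get only the reduced view:
#!/bin/sh
# abc (client-facing wrapper, placeholder name): expose only what clients may see.
# abc-full (placeholder name) prints the complete output and is called by the application.
abc-full | sed -n '1p'    # e.g. show clients only the first line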
Some of the benefits you can get by following this approach:
Performing checks to verify where the command was issued from is no longer necessary
Avoid coupling issues
Easier to maintain
Update: added the means to extract the IP of the current user at the beginning of the answer.
You were a little vague on the complete setup, so I'll have to infer a few things. Since you mentioned "my" server, I assume you can set permissions on files, change ownership of files, etc. (e.g. you can become root).
I also have to infer that the target abc program just produces some output and doesn't need to modify any files to speak of [other than (e.g.) /tmp/temp.$$]
As an example, let's do this from your home directory. Move the program abc to $HOME/private_bin and set the directory permissions to 700, which means that only you can enter the directory and execute what is in it.
Create a second directory, $HOME/public_bin, with normal permissions. Create a "launcher" program [let's call it abcpub] and put it in this directory. Set the permissions of abcpub to 4741; it's now a setuid program. Note that any non-root user may do this for files they own. It is not like creating a sudo-style root program, because that would require an ordinary user to do chown root ..., which they cannot.
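A minimal sketch of that setup, using exactly the paths and names above:
mkdir -p "$HOME/private_bin" "$HOME/public_bin"
mv abc "$HOME/private_bin/"
chmod 700 "$HOME/private_bin"            # only the owner can enter this directory
cp abcpub "$HOME/public_bin/"
chmod 4741 "$HOME/public_bin/abcpub"     # setuid bit plus rwx r-- --x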
Now we're set ...
You can access the real abc program anytime you want. Others have no direct access to abc.
The launcher abcpub will allow others to have access to abc, but the launcher can apply whatever restrictions you desire, including no access, output redirected to /dev/null, etc. abcpub can look at getuid and geteuid to determine who is executing it [you or somebody else].
We did the above example using your own uid and home directory. But we can repeat the process by creating an "abc" user in /etc/passwd and a /home/abc home directory. The abc user could be set up with a shell of /sbin/nologin; thus it's similar to nobody and can't hurt anything.
It may be even better to do this by creating a setgid program instead of setuid, as that allows better commingling: the original user retains their own user permissions but still gets access via the new group.
Also, it may be possible to configure sudo to get what you want.

How to request hard disk SMART status on a NASMT Q700 QNAP Linux box using the SSH interface?

I use a NASMT Q700 QNAP NAS. For remote monitoring purposes I want to read some values and save them into a database.
Since the web interface is very complex and full of JavaScript, I cannot scrape it. So I tried to connect to the NAS over SSH.
That is great, because SSH is one of the methods I can drive automatically from C#, and I get back text that I can parse.
The installed Linux system on the box is:
Linux NASMT 2.6.33.2 #1 Fri Mar 7 11:55:22 CST 2014 armv5tel unknown
Here is what I tried to reach my goal:
man is not installed.
smartctl is not installed. (Google told me to try this one out.)
I went into the /bin and /usr/bin directories and tried everything suspicious. There is a program called nasutil installed, but it is not very self-documenting. Various calls with different parameters did not work; I always get the same answer:
nasutil multi-call binary
[function] [arguments]...
Current defined functions:
init_nas_cache, init_admin_group, set_file_owner, chk_flash, reset_all, chk10198, get_trusted_domain, update_krb5_ticket
rescan_hd, check_e2key, burn_e2key, cnt_phy_nic, http_link, ip_filter, hdusb_copy, ims, qpkg, gen_upnp_desc, scanafpdb
eset_system, umount_all_vdd, sss_convert, httpd_init, get_hwsn, get_suid, setsum, getsum, rsyslog_util, radius_util, send_alert_mail, rsync_util
acl_cmd check_ldap clean_reset_pwd network_boot_rescan
I searched Google for this but could not find anything useful.
I am looking for a command on this Linux system, without smartctl, that gives me a list of the installed hard drives with their SMART status.
Does anyone have an idea?
Thank you very much in advance!
Actually, I was able to find the answer using email and contacts at Fujitsu.
The answer was as simple as can be:
# get_hd_smartinfo -d 1
1 is disk 1. Replace it with 2 if you want to check disk 2.
I have not tested it yet; as soon as I have, I'll accept the answer for everyone to see.
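A quick sketch for polling several bays in one go, assuming the same -d numbering continues for each installed disk:
#!/bin/sh
# Sketch only: print the SMART report for disks 1 and 2.
for disk in 1 2; do
    echo "=== disk $disk ==="
    get_hd_smartinfo -d "$disk"
done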

Hacking whoami to return a fake username

I've created a new whoami command which returns a fake username, and I have put it in the PATH by adding its directory in ~/.profile. It is set up so that my whoami is found before the actual whoami from Linux.
The main reason for doing this is that I am remotely accessing a Hadoop cluster and want the copied files to be owned by the fake username.
This works fine when I call whoami in the shell, and even echoing $PATH shows the path to my whoami before everything else. But for some reason, when Hadoop is called, it doesn't pick up the created whoami.
Can someone help me with how to fix this?
thanks
Most applications do not use whoami to determine a user's username or group. For instance, in bash you can use the command id to find more detailed information about yourself, or id [username] (such as id root) to find more detailed information about other users. Groups can be found with groups as well. Also, different programming languages, such as C, have their own methods of determining user identity, such as the getuid() function.
If you really "need" to go as far as faking your user account, you'll need to go down to the OS level and create hooks into the kernel/API that handle those methods.
Is it possible for you to simply chown the files after they are copied instead?
UPDATE:
It appears that some releases of Hadoop do actually use whoami (my own implementation w/ clustering does not).
In this event, the best (a term loosely used) suggestion would be to move the legitimate whoami executable aside and create a whoami shell script that goes in its place. The custom script should check the current user and, if it's "hadoop", return whatever fake username you want; otherwise return the valid output. Igor's answer would work in this case.
I suppose that Hadoop uses a different PATH variable than the one you have in your shell.
You can tune its PATH and add the directory with the fake whoami to its beginning.
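For example, prepending a placeholder directory (fakebin here is purely illustrative) in whatever environment file Hadoop actually reads:
export PATH="$HOME/fakebin:$PATH"    # fakebin stands in for the directory holding the fake whoami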
When that is impossible, you can write a small wrapper for whoami (I'm not sure it is a good idea, but you can do it if you want) that runs the original whoami except when the script is executed by the hadoop user:
#!/bin/sh
# Wrapper installed in place of whoami; the real binary has been moved aside.
WHOAMI=/bin/whoami.orig
if [ "$($WHOAMI)" = hadoop ]
then
    echo fake
else
    exec "$WHOAMI" "$@"
fi
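A hedged sketch of how the wrapper could be put in place (run as root; the file name whoami-wrapper.sh is just a placeholder, and the paths follow the /bin/whoami.orig used above):
mv /bin/whoami /bin/whoami.orig                 # keep the real binary aside
install -m 755 whoami-wrapper.sh /bin/whoami    # the wrapper script shown above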

Ensuring the existence of a user on a Debian GNU/Linux system

I'm currently working on a Debian package for an in-house program. As part of this package, I need to create the user that most of the program's functionality runs as. I am doing this in the postinst script. The postinst script can be run several times (on upgrade, for example), so it's important to ensure that I don't attempt to create the user every time.
So, how can I ensure that the user is created only the first time that the script is run, without affecting it on later runs of the script?
Try:
[aiden@dev ~]$ id aiden
uid=500(aiden) gid=500(aiden) groups=500(aiden)
[aiden@dev ~]$ id foomonkey
id: foomonkey: No such user
[aiden@dev ~]$
The first $? is 0, the second is 1.
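So in the postinst you can gate the user creation on that exit status; a minimal sketch, with myprog as a placeholder account name:
#!/bin/sh
set -e
# Create the service account only if it does not exist yet (placeholder name).
if ! id myprog >/dev/null 2>&1; then
    adduser --system --group --no-create-home myprog
fi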
You do not need to know whether the user exists or not. adduser(8) will not return an error if the user already exists with the same parameters. From the man page:
EXIT VALUES
0 The user exists as specified. This can have 2 causes: The user
was created by adduser or the user was already present on the
system before adduser was invoked. Invoking adduser a second
time with the same parameters as before also returns 0.
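So the postinst can simply call adduser unconditionally; a one-line sketch with the same placeholder account name as above:
adduser --system --group --no-create-home myprog    # returns 0 even if myprog already exists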
As mentioned before, you can use the 'id' command. If you would like to get all the users on a system, you can use:
getent passwd
which will list all the users on the system (even if they are in a remote database like LDAP or NIS, etc.).

What is the XDG_SESSION_COOKIE environment variable for?

I've been fighting with crontab recently, because in Intrepid gconftool uses a D-Bus backend, which means that it doesn't work when used from crontab.
To make it work I have had to export the relevant environment variables when I log in, so that the cron job finds the D-Bus session address when it comes to run.
Out of curiosity I wondered what environment cron could see, and it turns out all I have is HOME, LOGNAME, PATH, SHELL, CWD and this new one on me, XDG_SESSION_COOKIE. This looks curious, and several Google searches have thrown up a number of bugs or feature requests involving it, but nothing that tells me what it does.
My instinct is that this variable can be used to find all the stuff that I've had to export to the file that I source before the cron job runs.
My questions, therefore, are a) can I? b) if so, how? and c) what (else) does it do?
Thanks all
This is very interesting. I found out that it is the display manager setting this cookie. It can be used to register processes as belonging to a "session", which is managed by a daemon called ConsoleKit. That exists to support fast user switching. My KDE 4.2.1 system apparently supports it too.
Read this Fedora wiki entry.
So this environment variable is like DBUS_SESSION_BUS_ADDRESS in that it gives access to some entity (in the case of XDG_SESSION_COOKIE, a login session managed by ConsoleKit). For example, with that environment variable in place, you can ask the manager for your current session:
$ dbus-send --print-reply --system --type=method_call \
--dest=org.freedesktop.ConsoleKit \
/org/freedesktop/ConsoleKit/Manager \
org.freedesktop.ConsoleKit.Manager.GetCurrentSession
method return sender=:1.1 -> dest=:1.34 reply_serial=2
object path "/org/freedesktop/ConsoleKit/Session1"
$
The Manager also supports querying for the session some process belongs to:
$ [...].Manager.GetSessionForUnixProcess uint32:4494
method return sender=:1.1 -> dest=:1.42 reply_serial=2
object path "/org/freedesktop/ConsoleKit/Session1"
However, it does not list or otherwise contain the variables related to your cron job. That said, the documentation of dbus-launch says that libdbus will automatically find the right D-Bus bus address: for example, files containing the correct current D-Bus session address are stored in /home/js/.dbus/session-bus.
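Those files are shell-sourceable, so a hedged sketch of a cron script picking the address up from there might look like this (the glob and the gconftool key are purely illustrative):
#!/bin/sh
# Sketch: recover DBUS_SESSION_BUS_ADDRESS from the file dbus-launch left behind.
for f in "$HOME"/.dbus/session-bus/*; do
    . "$f"                      # defines DBUS_SESSION_BUS_ADDRESS, among others
done
export DBUS_SESSION_BUS_ADDRESS
gconftool-2 --get /desktop/gnome/some/key    # hypothetical key, for illustration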
