I have a Sage notebook server that runs in a screen session on Ubuntu Server 14.04 (32-bit). When I'm ssh'd to the machine, I can use my notebook in my browser as expected. If I'm not ssh'd to the machine (but notebook server still running in screen session), I can still log in and open my notebook, but when I press SHIFT+ENTER in a compute cell, I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_sage_input_3.py", line 8, in <module>
_interact_.SAGE_CELL_ID=1
NameError: name '_interact_' is not defined
If I then ssh back into the machine and close and reopen the notebook (logging out of the server is not necessary), I can use compute cells normally again. I don't even have to be attached to the screen session, just logged in to the host.
I thought the most likely culprit was the eCryptfs encryption of my home dir, so I created /var/sage/sage_notebook.sagenb, but I still get the error*. Currently the permissions are set to 750, but I also tried 777 without success.
The issue is clearly something that's missing when I'm not logged in, but I can't figure out what. The server is a pretty vanilla, ext4 install. Does anyone know what I'm missing?
*Actually, I was getting permission-denied errors when the notebook dir was in my home dir and I wasn't logged in. The error shown is what I see now that I've moved it to /var/sage/...
I got the answer from a sibling post I made at Unix SE.
I had moved the notebook dir out of my home dir, but Sage was still accessing its config in ~/.sage. export HOME=/var/sage worked; I'll probably create a separate user to run the server.
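A minimal sketch of the workaround, assuming the notebook data lives in /var/sage as above (the echo is just a sanity check before starting the server):

```shell
# Point HOME at the notebook directory so Sage reads its ~/.sage config
# from there rather than from the eCryptfs-encrypted home dir, which is
# unmounted once no session is logged in.
export HOME=/var/sage
echo "$HOME"   # → /var/sage
```

Running the server under a dedicated user whose actual home directory is /var/sage would achieve the same thing without the export.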
Related
I'm trying to execute a .bat file on a server on the local network with PsExec.
I'm currently trying with this command:
.\PsExec.exe -i -u Administrator \\192.168.4.36 -s -d cmd.exe -c "Z:\NX_SystemSetup\test.bat"
The server has no password (it has no internet connection and is running a clean install of Windows Server 2016), so I'm currently not entering one; when a password is asked for, I simply press Enter, which seems to work. Also, the .bat file currently only opens Notepad on execution.
When I enter this command, I get the message "The file cannot be accessed by the system".
I've tried executing it in PowerShell with administrator privileges (and also without, since I saw another user on Stack Overflow mention that it only worked for them that way), but without success.
I'm guessing this is a privilege problem, since it "can't be accessed", which would indicate to me that the file was indeed found.
I ran net share in a cmd window and it says that C:\ on my server is shared.
The file I'm trying to copy is also not in any kind of restricted folder.
Any ideas what else I could try?
EDIT:
I have done a lot more troubleshooting.
On the server, I went into the firewall settings and explicitly opened TCP ports 135 and 445, since according to what I found online, PsExec uses these.
Also on the server, I opened the Properties of the "Windows" folder on C: and added an admin$ share, where I gave everyone full rights to the folder (stupid, I know, but I'm desperate for this to work).
I also played around a bunch more with different commands. Not even .\PsExec.exe \\192.168.4.36 ipconfig works; I still get the same error: "The file cannot be accessed by the system".
This is honestly maddening. There seems to be no documentation of this error anywhere on the internet. Searching explicitly for "file cannot be accessed" only brings up results for "file cannot be found" and similar.
I'm surely just missing something obvious. Right?
EDIT 2
I also tried adding the domain name in front of the username. I checked the domain by using set user in cmd on the server.
.\PsExec.exe \\192.168.4.16 -u DomainName\Administrator -p ~ -c "C:\Users\UserName\Documents\Mellanox Update.bat"
-p ~
seems to work for the password, so I added that.
I also tried creating a shortcut to the .bat file and executing it as Administrator, using it instead of the original .bat file. The error stays the same: "The file cannot be accessed by the system".
As additional info: the PC I'm sending the command from is running Windows 10; the server is running Windows Server 2016.
So, the reason for this specific error is as simple and as stupid as it gets.
Turns out I was using the wrong IP. The IP I was using is an IPMI address, which does not allow any traffic other than IPMI-related stuff.
I have not gotten it to work yet, since I've run into some different errors, but the original question/problem has been resolved.
I deliberately made a failed su attempt in order to observe the log.
However, I couldn't find where su writes its logs.
My box is Kali 2019.
I uncommented the SULOG_FILE line in my /etc/login.defs file:
# If defined, all su activity is logged to this file.
#
SULOG_FILE /var/log/sulog
Despite having done that, I still don't have a sulog file in /var/log.
I created one manually and made another failed attempt, but nothing was written.
Am I missing something?
Thank you all in advance.
Often, login attempts or requests for a new login shell are logged to the OS mailbox and/or to your system log.
It depends on your OS's default configuration.
Try to check file:
/var/spool/mail/
or try:
journalctl -r
to see all of your system logs, starting with the newest.
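For context: on a PAM-based distribution like Kali, su failures are typically recorded via syslog in /var/log/auth.log (or the journal), which may be why nothing ever appears in the SULOG_FILE location. A sketch of what to grep for, using a hypothetical sample line in the usual format:

```shell
# Hypothetical sample of the syslog line su writes on a failed attempt
# (hostname and username are made up):
line='Jan 22 17:23:26 kali su: FAILED SU (to root) user on pts/0'
# On a live system you would run something like:
#   grep 'FAILED SU' /var/log/auth.log
echo "$line" | grep -c 'FAILED SU'   # → 1
```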
I have a file that I want to transfer to a remote machine that is running Windows 7 32-bit.
I have a script that enables me to push the file to the machine from a linux management server, using a combination of:
1) smbclient to mount the Admin share on the W7 machine
2) winexe to move the file to the location I require
This leaves me with the file in the correct location, but owned by the Admin user, whereas I need it to be editable by a standard user, User1.
I have been trying to resolve this by using icacls.
Using winexe I can run this remotely on the W7 machine. Initially I tried setting the permissions to "Full" for the user account:
icacls c:......\myFile /grant User1:F
Checking this from the command line showed that it had apparently worked:
icacls c:......\myFile
c:......\myFile User1:(F)
However, from the Windows desktop, the file properties dialogue showed User1 having only read permissions, and anything else gave access denied.
My next attempt was:
icacls c:......\myFile /setowner User1
However, when logged in to the Windows desktop as User1, attempting to delete or edit the file now tells me that doing so requires permission from User1... which is a bit perverse, since I am logged in as User1.
Any ideas?
This may or may not help, but I was unable to delete a file I copied from a Linux machine to a Windows shared folder; I was getting a 'need Administrator permission' type error.
I was trying to solve this with the smbclient -c "setmode -r;" option, but when this didn't work I realised the Windows folder itself was set to read-only access for all but Administrator-level users.
I am trying to pull a file from another computer into R environment in RStudio on Centos 6
I've tried it in plain R first and when I issue
readLines(pipe( 'ssh root@X.X.X.X "cat /path/somefile.sh"' ))
it correctly asks me for the password of my ssh key and reads the contents.
However if the same command is executed from RStudio all I get is:
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied, please try again.
ssh_askpass: exec(rpostback-askpass): No such file or directory
Permission denied (publickey,gssapi-with-mic,password).
I suspect the reason is that RStudio on CentOS actually runs as the rstudio-server user (and the GUI is provided in a browser). Does anyone know how to properly access ssh'd resources from it?
UPD: after executing
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
as suggested below, it no longer outputs the askpass errors, but it still does not work. Now the console seems to wait indefinitely for the command to execute.
rpostback-askpass is part of RStudio. It may help to add its location (/usr/lib/rstudio-server/bin/postback on my system) to PATH so that ssh can find it:
Sys.setenv(PATH = paste0(Sys.getenv('PATH'), ':/usr/lib/rstudio-server/bin/postback'))
UPDATE: RCurl has an scp function for copying files over an ssh connection. See this answer for details. If you are running your scripts with RStudio, you can use its API to enter the ssh password interactively with hidden input:
pass <- .rs.askForPassword("password?")
and rstudioapi can help to determine whether the script is launched by RStudio or not.
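For what it's worth, the Sys.setenv() call above is just the R-level equivalent of appending a directory to PATH in the shell; a sketch (the postback path is the one assumed in this answer):

```shell
# Append the assumed postback directory so ssh can locate the
# rpostback-askpass helper; this mirrors the Sys.setenv() call above.
PATH="$PATH:/usr/lib/rstudio-server/bin/postback"
# Verify the entry is now present:
echo "$PATH" | grep -c 'rstudio-server/bin/postback'   # → 1
```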
The fix for this is described in my answer at the bottom of the post.
I'm trying to get my Raspberry Pi, which is running stock Debian, to be a remote repository for Mercurial. I have set up local repositories on my desktop and laptop (running Mageia) and they work fine locally. I want to be able to push and pull changes to the Pi. I've set up OpenVPN on the Pi, so I can access it and, hopefully, push and pull my software from anywhere in the world.
So, I have followed these instructions:
Step-by-step (using Apache2 as my web server) and when I try to connect as in step 9.1.2 with this:
Check if it works by directing your browser to yourhost/hg/.
By putting pi/hg into Firefox, I get an internal server error. (Just putting pi into Firefox gives me the default Apache message and all is good.)
My Apache error log shows me this:
Traceback (most recent call last):
File "/var/hg/hgwebdir.cgi", line 18, in <module>
application = hgweb(config)
File "/usr/lib/python2.7/dist-packages/mercurial/hgweb/__init__.py", line 27, in hgweb
return hgweb_mod.hgweb(config, name=name, baseui=baseui)
File "/usr/lib/python2.7/dist-packages/mercurial/hgweb/hgweb_mod.py", line 34, in __init__
self.repo = hg.repository(u, repo)
File "/usr/lib/python2.7/dist-packages/mercurial/hg.py", line 93, in repository
repo = _peerlookup(path).instance(ui, path, create)
File "/usr/lib/python2.7/dist-packages/mercurial/localrepo.py", line 2350, in instance
return localrepository(ui, util.urllocalpath(path), create)
File "/usr/lib/python2.7/dist-packages/mercurial/localrepo.py", line 79, in __init__
raise error.RepoError(_("repository %s not found") % path)
mercurial.error.RepoError: repository /var/hg/repos not found
[Wed Jan 22 17:23:26 2014] [error] [client 10.8.0.6] Premature end of script headers: hgwebdir.cgi
If I try to connect from Mercurial with remote (http) repository as pi/ I get this in my Apache logs:
[error] [client 10.8.0.6] File does not exist: /var/www/.hg
In my Tortoise HG logs on the local machine I get this:
[command returned code 255 Wed Jan 22 17:24:49 2014]
% hg --repository /path/sqlforms outgoing --quiet --template {node}
'URL goes here' does not appear to be an hg repository:
---%<--- (text/html)
<html><body>
<p>This is the default web page for this server.</p>
<p>The web server software is running but no content has been added, yet. Rob has changed summat</p>
If I use pi/hg as the remote server, Tortoise HG gives me this:
[command returned code 255 Wed Jan 22 17:25:15 2014]
% hg --repository /path/sqlforms outgoing --quiet --template {node} http://pi/hg/
HTTP Error: 500 (Internal Server Error)
[command returned code 255 Wed Jan 22 17:25:24 2014]
sqlforms%
/var/hg/repos does exist as a directory.
Hopefully I've given the right amount of info there. I'm no Linux newb, but I am new to Apache and fairly new to Mercurial, so I'm probably doing something stupid. AFAIK I have faithfully copied the steps from the web link above. Is that enough information to troubleshoot? If not, I can supply anything else as needed. Many thanks.
It ended up being a number of different things, some of which Mata mentioned, so I'm accepting their answer, as they were kind enough to point me in the right direction. I'm putting some more details below in case it helps someone else, because some points aren't well made in any of the documentation I've found.
It's important to note that the /var/hg directory you specify ends up being accessed as server_name/hg over HTTP. So, if you put a directory on your server at:
/var/hg/dex
Then this is accessed via http as:
http://server_name/hg/dex
So, for HTTP access you are effectively "mapping" /var/hg/dex to hg/dex.
I think what is super-confusing about the documentation is how heavily the name hg is reused; it would be better described if the base directory structure on your server were something like this:
/var/mercurial_repositories
You would obviously have to set up the Apache config file to point there as its base directory rather than at:
/var/hg.
This would make it far more obvious that you are mapping /var/mercurial_repositories to hg for remote access. As described, it is far too ambiguous, with hg used in too many different places. While this might be obvious to experienced users or to someone who's had it explained to them, to a newb it's very confusing.
Then, the other thing that is not obvious in the documentation is that:
/var/hg/repos
is not a directory for ALL repositories; it is a directory for one repository. I struggled with this for quite some time. Again, the documentation is very misleading for a newb. If it said:
/var/hg/repo
(singular), it might be a lot better.
I realised later that, tucked away in one of the pages of documentation, it mentions that you need subdirectories within repos, but again, the way this is worded is very confusing for someone starting out. Something like:
/var/mercurial/repositories_base_directory
Would be far clearer.
Also, for every directory you set up in your base directory, you have to add a new entry in the file:
/var/hg/hgweb.config
This is done like this:
[paths]
c82xx = /var/hg/repos/c82xx
The documentation on this is especially unhelpful in that it just says:
repos/ = repos/
The issue with these path settings, which (as far as I could see) is explained nowhere, is that the left-hand side of the equals sign is how your remote machine accesses the directory containing your repository, as a subdirectory of:
http://server/hg
The right-hand side is the absolute path on the server. This means you can type a relatively short path when pushing and pulling remotely. In this instance:
http://server/hg/c82xx
Next up, as Mata pointed out, you need to run hg init in the directory on the server, then, from the local machine, push whatever you have already got to the server. So, from the directory with .hg on your local machine (in this case my c82xx project):
hg push http://server/hg/c82xx
There are two more vital things to note, though, before you can do this:
1. You need to create, within the .hg directory on the SERVER, a file called hgrc and put this in it:
[web]
push_ssl = false
allow_push = *
Now, from what I understand, you should ONLY do this on a trusted network. For me, I'm on a VPN or my LAN, so it's fine.
2. That hgrc file, the repository directory, and all its subdirectories must have permissions that allow the web server to write to them.
That should do it. Phew!
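To summarise, the two files described above end up looking something like this (using the c82xx repository from the example; the hgrc settings are only safe on a trusted network):

```ini
# /var/hg/hgweb.config -- left side: URL path under http://server/hg/,
#                         right side: absolute path on the server
[paths]
c82xx = /var/hg/repos/c82xx

# /var/hg/repos/c82xx/.hg/hgrc -- per-repository settings
[web]
push_ssl = false
allow_push = *
```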
I swear, version control is more complex than writing the software in the first place! :D
Specifying a directory as config only works if it's a repository. You're pointing it to a directory which doesn't seem to be a repository.
Maybe you're trying to serve multiple repositories from within that directory? In that case you need to set config to point to a config file where you can then specify the repositories you want to include.
Have a look here, it should describe everything you need.