Thin client message "not enough space" when unpacking the CAB - windows-ce

I am installing a .NET (CF) app using a CAB on a thin client running WinCE 6.0. When I first install it, everything is fine and the app gets installed in the specified location.
Just out of curiosity, I clicked on the same CAB again and was greeted with a "Not enough space" message. None of the files were modified, so it doesn't make any sense to me.
Are there any settings in the CAB I should be using to avoid this?
I have been using CABs for 3 years now and haven't seen this type of message before. The message would make sense if the files had changed and grown, but if nothing changed, something is off.

This might help:
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_vm_admin_guide.pdf
(Ctrl-F and search for "full" or "disk")

Is the hard drive on the thin client almost completely full? It sounds to me like it has just enough space to install the app the first time, and when you run the CAB again it can't find enough free space on the drive.

I think the installer only checks the registry to detect a previous installation of the same program; it does not check whether the files from the previous installation are still present. If they were deleted, or the file system is not persistent, the new installation has nothing to overwrite.
On top of that, even if the files are present, the installer would also have to verify that the file sizes are as expected (they could be zero after file system corruption, for example). And I am probably still forgetting other edge cases.
I suppose that, for performance and consistency reasons, it is simply easier to ask for the full amount of free space.
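(I don't know the exact check wceload performs, but the logic described above boils down to something like this rough Python sketch, purely for illustration — the size and install path are made up, and a real installer would read the uncompressed size from the CAB header and the install record from the registry:)

    import shutil

    # Hypothetical figure: a real installer reads the uncompressed size from the CAB.
    required_bytes = 12 * 1024 * 1024
    # Target volume; on the device this would be the install location from the CAB.
    install_dir = "."

    free = shutil.disk_usage(install_dir).free
    if free < required_bytes:
        # Even on a reinstall the installer plays it safe and demands the full
        # size again, instead of checking whether every old file is still there
        # and could simply be overwritten in place.
        raise RuntimeError("Not enough space")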

Related

MS Access 2016 program pops Stack errors and Security Warnings on non-developer PCs

I read all the rules on asking good questions here, I hope this will suffice.
I am having problems with an Access 2016 .ACCDE database.
The program runs fine on my machine. When I try to run it on my friends' machines (either the .ACCDE or .ACCDB version), it won't load and instead pops Out Of Stack Space errors and the Security Notice.
So, here's the set up:
The program was written in Access 2016. It is a Front End/Back End design. It's not a very big program: 16 tables, 41 forms and 51 code modules.
I use the FMS Access Analyzer to help make sure my code is clean, so the quality of the program is good to very good.
PRIOR versions of the program ran fine on all machines. I made several changes and improvements and moved it to the \Documents folder. Now we are having problems.
Machine 'A' (Development PC): New Win 10, 8GB RAM, Full MS Access (not runtime).
Machine 'B': Newish laptop, 2GB RAM, lots of disk, Access 2016 Runtime. It ran prior versions of the program fine but now is blowing errors.
Machine 'C': Newish desktop, 8GB RAM, lots of free disk, full Access (not runtime). It also ran prior versions of the program fine but now is blowing errors.
Initially, the opening form would pop an error that the On Load event caused an Out Of Stack Space event. User says,
"Still happens after a fresh reboot. It does NOT happen with other .accde files." Both A and B machines are showing the same errors.
I made many changes but could not cure the Out Of Stack Space error. Finally, I switched to an AutoExec macro instead of a startup form. The AutoExec macro then caused Error 3709 and aborted. Machine B had CPU at 49% and memory at 60%. The micro SD drive had 5.79GB used and 113GB free.
I deleted the macro. Went back to startup Form, still no luck.
I asked if he got an MS Security error; he said, "Yes, Microsoft Access Security Notice. I figured it was just a general warning since it lets me go ahead and open the file. The directory where we have the program (C:Documents\Condor) was already a Trusted Location on my work machine."
So, does this sound like a Security error?
Is it a problem to have the program in the \Documents folder?
Okay, well, there's a lot going on in this post - so to sanity check I would suggest getting back to basics: working just with the .accdb and a full license - does it throw any errors at all?
An aside: with the runtime, an error = crash... usually it just rolls over and closes without any message.
An aside: you don't need .accde for the runtime, since the runtime can't affect design anyway; you only need .accde if there are full-license users you want to keep out of design view.
You have to be sure that the runtime / accde machines have the exact same path to the back end as your full-license machine's path, because the path is stored in the front end.
But sanity checking the accdb on the full-license machine is the first step in debugging this... if that is not all okay, then it must be dealt with first.
I'm sorry, I thought I had posted that the problem was resolved. The table links broke because, as you pointed out, one person's This PC\Documents\whatever folder is different from everyone else's (C:\Users\KentH\Documents\whatever vs. C:\Users\JohnT\Documents\whatever).
Thank you for your time and suggestions. Broken table links can cause the stack error, fer sure, and that can be caused by trying to put programs someplace other than the C:\Programs folder.
D'oh!
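(For anyone else hitting this: the per-user path problem looks roughly like the minimal Python sketch below, with the Condor folder name taken from the thread and the file name and user names as examples only.)

    from pathlib import Path

    # "Documents\Condor" resolves to a different absolute path for every account:
    #   C:\Users\KentH\Documents\Condor\backend.accdb
    #   C:\Users\JohnT\Documents\Condor\backend.accdb
    backend = Path.home() / "Documents" / "Condor" / "backend.accdb"
    print(backend)          # whatever it is for the *current* user

    # A linked table that stores the first user's absolute path therefore points
    # at a file that doesn't exist for anyone else, and opening the links fails
    # at startup.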

Changing the order in which directories are searched for programs in Linux

I was recently in a situation where the Software Center on my Ubuntu installation was not starting. When I tried to launch it from the console, I found that Python was unable to find Gtk, although I hadn't removed it.
from gi.repository import Gtk,Gobject
ImportError: cannot import name Gtk
I came across a closely related question on Stack Overflow (I am unable to provide a link to the question right now). The accepted solution (which also worked for me) was to remove the duplicate installation of GTK from /usr/local, as GObject was present in that directory but Gtk was not.
So I removed it, launched software-center again, and it started.
While I am happy that the problem is solved, I would like to know if removing files from /usr/local can cause severe problems.
Also, echo $PATH on my console gives:
/home/rahul/.local/bin:/home/rahul/.local/bin:/home/rahul/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/home/rahul/.local/bin:/home/rahul/.local/bin
which tells me that /usr/local/bin is searched before /usr/bin. Should $PATH be modified so that the order of lookup is reversed? If yes, then how?
You actually don't want to change that.
/usr/local is a path that, according to the Linux Filesystem Hierarchy Standard, is dedicated to data specific to this host. Let's go back in time a bit: when computers were expensive, to use one you had to go to some lab where many identical UNIX(-like) workstations stood. Since disk space was also expensive, and the machines were all identical and had more or less the same purpose (think of a university), they had /usr mounted from a remote file server (most likely through the NFS protocol), which was the only machine with disks big enough to hold all the applications that could be used from the workstations. This also allowed for ease of administration: adding a new application or upgrading one to a newer version could be done just once, on the file server, and all machines would "see" the change instantly. This is why the scheme persisted even as bigger disks became inexpensive.
Now imagine that, for some reason, a single workstation needed a different version of an application, or maybe a new application was bought with only a few licenses and could thus be run only on selected machines: how do you handle this situation? This is why /usr/local was born, so that single machines could override network-wide data with local data. For this to work, /usr/local must of course point to a local partition, and things in that path must come before things in /usr in all search paths.
Nowadays, Linux machines are very often stand-alone, so you might think this scheme no longer makes sense, but you would be wrong: modern Linux distributions have package management systems which, more or less, play the role of the above-mentioned central file server. What if you need a different version of an application than what is available in the Ubuntu repositories? You could install it manually, but if you put it in /usr, it could be overwritten by an update performed by the package management system. So you put it in /usr/local instead, as this path is generally guaranteed not to be altered by the package management system. Again, it should be clear that in this case you want anything in /usr/local to be found before anything in /usr.
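(If you want to see that ordering in action, you can walk $PATH yourself and check which copy of a command wins, which is roughly what the "which" command does. A small Python sketch:)

    import os

    def which(cmd):
        """Return the first match for cmd, scanning $PATH left to right."""
        for d in os.environ["PATH"].split(os.pathsep):
            candidate = os.path.join(d, cmd)
            if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
                return candidate
        return None

    print(which("python3"))
    # With /usr/local/bin ahead of /usr/bin, a manually installed python3 in
    # /usr/local/bin shadows the distribution's copy in /usr/bin -- exactly the
    # "local overrides network/package data" behaviour described above.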
Hope you get the idea :).

How to determine that the shell script is safe

I downloaded this shell script from this site.
It's suspiciously large for a bash script. So I opened it with a text editor and noticed
that after the code there are a lot of nonsense characters.
I'm afraid to give the script execute rights with chmod +x jd.sh. Can you advise me how to recognize whether it's safe, or how to restrict its rights on the system?
Thank you.
The "non-sense characters" indicate binary files that are included directly into the SH file. The script will use the file itself as a file archive and copy/extract files as needed. That's nothing unusual for an SH installer. (edit: for example, makeself)
As with other software, it's virtually impossible to decide whether or not running the script is "safe".
Don't run it! That site is blocked where I work, because it's known to serve malware.
Now, as to verifying code, it's not really possible without isolating it completely (technically difficult, but a VM might serve if it has no known vulnerabilities) and running it to observe what it actually does. A healthy dose of mistrust is always useful when using third-party software, but of course nobody has time to verify all the software they run, or even a tiny fraction of it. It would take thousands (more likely millions) of work years, and would find enough bugs to keep developers busy for another thousand years. The best you can usually do is run only software which has been created or at least recommended by someone you trust at least somewhat. Trust has to be determined according to your own criteria, but here are some which would count in the software's favor for me:
Part of a major operating system/distribution. That means some larger organization has decided to trust it.
Source code is publicly available. At least any malware caused by company policy (see Sony CD debacle) would have a bigger chance of being discovered.
Source code is distributed on an appropriate platform. Sites like GitHub enable you to gauge the popularity of software and keep track of what's happening to it, while a random web site without any commenting features, version control, or bug database is an awful place to keep useful code.
While the source of the script does not seem trustworthy (IP address?), this might still be legit. With shell scripts it is possible to append binary content at the end and thus build a type of installer. Years ago, Sun would ship the JDK for Solaris in exactly that form. I don't know if that's still the case, though.
If you wanna test it without risk, I'd install a Linux in a VirtualBox (free virtual-machine software), run the script there and see what it does.
Addendum on seeing what it does: there is a variety of tools on UNIX that you can use to analyze a binary program, like strace, ptrace, ltrace. What might also be interesting is running the script inside a chroot. That way you can easily find all the files it installs.
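(For example, inside the VM you could run the script under strace and log every file it touches. A short Python wrapper as a sketch, again using the jd.sh name from the question; the same strace command can of course be typed directly in a shell:)

    import subprocess

    # Trace file-related syscalls of the installer and everything it forks,
    # writing the log to trace.log for later inspection.
    subprocess.run(
        ["strace", "-f", "-o", "trace.log",
         "-e", "trace=open,openat,creat,rename,unlink,unlinkat",
         "sh", "./jd.sh"],
        check=False,    # the installer may exit non-zero; we only want the trace
    )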
But at the end of the day this will probably yield more binary files which are not easy to examine (as probably any developer of anti-virus software will tell you). Therefore, if you don't trust the source at all, don't run it. Or if you must run it, do it in a VM where at least it won't be able to do too much damage or access any of your data.

Commits are extremely slow/hang after a few uploads

I've recently started to notice really annoying problems with VisualSVN (+ server) and/or TortoiseSVN. The problem occurs on multiple (2) machines, both running Windows 7 x64.
The VisualSVN-server is running Windows XP SP3.
What happens is that after, say, 1, 2 or 3 files (or a few more, but almost always at the same file) the commit just hangs on transferring data, with a speed of 0 bytes/sec.
I can't find any error logs on the server. I also requested a 45-day trial of the Enterprise Server for its logging capabilities, but there are no errors there either.
Accessing the repository disk itself is fast; I can search/copy/paste on the SVN repo disk just fine.
The VisualSVN Server also does not use excessive amounts of memory, and its CPU usage stays around 0-3%.
Both the server's and TortoiseSVN's memory footprints move/change, which would indicate that at least "something" is happening.
Committing with Eclipse (a different project (PHP), a different repository on the server) works great: no slowdowns, almost instant commits, whether with 1 file or 50 files. The Eclipse plugin I use is Subclipse.
I am currently quite stuck on this problem and it is preventing us from working with SVN right now.
[edit 2011-09-08 1557]
I've noticed that it goes extremely slow on 'large' files, for instance a 1700MB .resx (binary) or a 77KB .h source (text) file. 'Small' files under 10KB go almost instantly.
[edit 2011-09-08 1608]
I've just added the code to code.google.com to see whether the problem is on my end or the server's end. Adding to Google Code goes just fine, no hangs at all: 2.17MB transferred in 2 minutes and 37 seconds.
I've found and fixed the problem. It turned out to be a faulty NIC: speedtest.net showed ~1 Mbit, and swapping in a different NIC pushed this to the maximum of 60 Mbit and solved my commit problems.
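(In hindsight, a quick way to rule SVN out and test the raw link is a plain socket copy to the server. A Python sketch; the host and port are placeholders, and the receiving side just needs something like "nc -l 9000 > /dev/null" running first (exact nc flags vary by version). A figure far below the link speed points at the NIC or driver rather than at SVN.)

    import socket, time

    HOST, PORT = "svn-server", 9000      # placeholders
    SIZE = 50 * 1024 * 1024              # push 50 MB of zeros

    chunk = b"\x00" * 65536
    sent, start = 0, time.time()
    with socket.create_connection((HOST, PORT)) as s:
        while sent < SIZE:
            s.sendall(chunk)
            sent += len(chunk)
    elapsed = time.time() - start
    print(f"{sent / elapsed / 1e6:.1f} MB/s raw throughput")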

Linux: Remove application settings after program is uninstalled?

I'm writing a program for Linux that stores its data and settings in the home directory (e.g. /home/username/.program-name/stuff.xml). The data can take up 100 MB or more.
I've always wondered what should happen to the data and settings when the system admin removes the program. Should I then delete these files from every (!) home directory, or should I just leave them alone? Leaving hundreds of MB in home directories seems quite wasteful...
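(For reference, the layout in question is just a per-user dot-directory, along these lines — a minimal Python sketch using the .program-name directory and stuff.xml file from the question:)

    from pathlib import Path
    import xml.etree.ElementTree as ET

    # Each user gets their own copy under $HOME, e.g. /home/username/.program-name/
    settings_dir = Path.home() / ".program-name"
    settings_dir.mkdir(exist_ok=True)

    root = ET.Element("settings")
    ET.SubElement(root, "option", name="example", value="42")
    ET.ElementTree(root).write(settings_dir / "stuff.xml")
    # The package manager's database knows nothing about these files, so a normal
    # uninstall never touches them.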
I don't think you should remove user data, since the program could be installed again in the future, or the user might move their data to another machine where the program is installed.
Anyway, this kind of thing is usually handled by a removal script (it can be make uninstall; more often it's an uninstallation script run by your package manager). Different distributions have different policies. Some package managers have an option to specify whether to remove logs, configuration files (from /etc) and so on. None of them touches files in users' home directories, as far as I know.
What happens if the home directories are shared between multiple workstations (i.e. NFS-mounted)? If you remove the program from one of those workstations and then go blasting the files out of every home directory, you'll probably really annoy the people who are still using the program on the other workstations.
