TL;DR: I have a large file with no extension, Toolsey just calls it "raw data", and it's too large for any application on my PC to open.
Longer story:
My Gmail account, which my parents made for me when I was a kid and which I haven't used since high school, alerted me that I was pushing my storage limit. So I checked it out, and it turns out 3.2GB of my 15GB of storage is dedicated to this one file, "installation1", dated 2006, seven years before the account existed. I downloaded it, deleted it from my email, and did everything in my power to open it. Browsers just re-downloaded it from my PC back onto my PC instead of displaying it, and almost everything else refused because of the size. The only thing that could open it was VSCode, but that needed an allowance of 4GB of RAM and warned that the file was binary or in an unsupported text encoding, and when I went to display it anyway the program just crashed. Toolsey says "raw data (format not in libmagic database)". I have no idea what this is, and it's hilarious to me because it seems like something out of a creepypasta or Snow Crash, though I know there's some explanation.
My only guess is that it could be a non-Windows OS of some kind, since the account was created on a Mac, but I really am not sure.
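A quick way to get a first clue about a mystery blob like this is to look at the first few bytes for a magic number; most real formats (zip archives, disk images, database files) announce themselves right at the start, even when an identifier tool gives up. A minimal C++ sketch that reads only the first 64 bytes, so the 3.2GB size doesn't matter; it assumes the file sits in the current directory under its downloaded name, "installation1":

    #include <cctype>
    #include <cstdio>
    #include <fstream>

    int main() {
        // Open in binary mode and read only the first 64 bytes.
        std::ifstream file("installation1", std::ios::binary);
        if (!file) {
            std::fprintf(stderr, "could not open file\n");
            return 1;
        }

        unsigned char buf[64] = {0};
        file.read(reinterpret_cast<char*>(buf), sizeof(buf));
        const std::streamsize n = file.gcount();

        // 16 bytes per row: hex on the left, printable ASCII on the right,
        // which makes signatures like "PK" (zip) or "SQLite format 3" easy to spot.
        for (std::streamsize row = 0; row < n; row += 16) {
            for (std::streamsize i = row; i < row + 16 && i < n; ++i)
                std::printf("%02x ", buf[i]);
            std::printf("  ");
            for (std::streamsize i = row; i < row + 16 && i < n; ++i)
                std::printf("%c", std::isprint(buf[i]) ? buf[i] : '.');
            std::printf("\n");
        }
        return 0;
    }

If the dump is all zeros or looks like white noise, that is consistent with the "raw data" verdict: it could be a headerless disk image, an encrypted container, or a chunk of a backup with no recognizable signature at all.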
I read all the rules on asking good questions here, and I hope this will suffice.
I am having problems with an Access 2016 .ACCDE database.
The program runs fine on my machine. When I try to run it on my friends' machines (either the .ACCDE or .ACCDB version), it won't load; instead it pops Out Of Stack Space errors and the Security Notice.
So, here's the set up:
The program was written in Access 2016. It is a Front End/Back End design. It's not a very big program: 16 tables, 41 forms, and 51 code modules.
I use the FMS Access Analyzer to help make sure my code is clean, so the quality of the program is good to very good.
PRIOR versions of the program ran fine on all machines. I made several changes and improvements and moved it to the \Documents folder. Now we are having problems.
Machine 'A' (Development PC): New Win 10, 8GB RAM, Full MS Access (not runtime).
Machine 'B': Newish laptop, 2GB RAM, lots of disk, Access 2016 Runtime. It ran prior versions of the program fine but is now blowing errors.
Machine 'C': Newish desktop, 8GB RAM, lots of free disk, full Access (not runtime). It also ran prior versions of the program fine but is now blowing errors.
Initially, the opening form would pop an error that the On Load event had caused an Out Of Stack Space error. The user says,
"Still happens after a fresh reboot. It does NOT happen with other .accde files." Both A and B machines are showing the same errors.
I made many changes but could not cure the Out Of Stack Space error. Finally, I switched to an AutoExec macro instead of a startup form. The AutoExec macro then caused Error 3709 and aborted. Machine B was at 49% CPU and 60% memory, and its micro SD drive had 5.79GB used and 113GB free.
I deleted the macro. Went back to startup Form, still no luck.
I asked if he got an MS security error, and he said, "Yes, Microsoft Access Security Notice. I figured it was just a general warning since it lets me go ahead and open the file. The directory where we have the program (C:\Documents\Condor) was already a Trusted Location on my work machine."
So, does this sound like a Security error?
Is it a problem to have the program in the \Documents folder?
Okay, well, there's a lot going on in this post, so to sanity check I would suggest getting back to basics: working just with the .accdb and a full license, does it throw any errors at all?
An aside: with the runtime, an error = crash... usually it just rolls over and closes without any message.
Another aside: you don't need .accde for the runtime, since the runtime can't affect design anyway; you'd only need .accde if there are full-license users you want to keep out of design view.
You have to be sure that the runtime / .accde machines have the exact same path to the back end as your full-license machine, as the path is stored in the front end.
But sanity checking the .accdb on the full-license machine is the first step in debugging this... if that is not all okay, it must be dealt with first.
I'm sorry, I thought I had posted that the problem was resolved. The table links broke because, as you pointed out, one person's This PC\Documents\whatever folder is different from anyone else's (C:\Users\KentH\Documents\whatever vs. C:\Users\JohnT\Documents\whatever).
Thank you for your time and suggestions. Broken table links can cause the stack error, fer sure, and that can be caused by trying to put programs someplace other than the C:\Programs folder.
D'oh!
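For what it's worth, this is the same per-user redirection any Windows program sees when it resolves the Documents folder programmatically: each account gets its own path, which is exactly why a link saved on one machine dangles on every other. A small C++ sketch purely to illustrate the lookup (this is not something the Access app itself does; SHGetKnownFolderPath is the standard Win32 call for resolving known folders on Vista and later):

    // Prints the current user's Documents folder, e.g.
    // C:\Users\KentH\Documents on one machine and
    // C:\Users\JohnT\Documents on another.
    #include <windows.h>
    #include <ShlObj.h>        // SHGetKnownFolderPath
    #include <KnownFolders.h>  // FOLDERID_Documents
    #include <iostream>

    #pragma comment(lib, "Shell32.lib")
    #pragma comment(lib, "Ole32.lib")
    #pragma comment(lib, "Uuid.lib")

    int main() {
        PWSTR path = NULL;
        HRESULT hr = SHGetKnownFolderPath(FOLDERID_Documents, 0, NULL, &path);
        if (SUCCEEDED(hr)) {
            std::wcout << path << std::endl;
            CoTaskMemFree(path);  // the caller must free the returned string
        }
        return 0;
    }

Any front end that stores an absolute link under a path like that will only work for the user who created it, which is why putting the back end somewhere that is identical on every machine (as suggested above) or relinking the tables at startup is the usual fix.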
Hoping some SysAdmins can weigh in here, because I am most assuredly not one.
OS: Ubuntu Server 14.04
CMS: Expression Engine 2.9 (with extras, such as Expresso Store)
Server type: Virtual
Error: Segmentation fault (11)
Unable to load the webpage because the server sent no data. Error code: ERR_EMPTY_RESPONSE
We do not believe it is a code issue on the ExpressionEngine side of things, and my research indicates it is normally something awry on the server itself or externally (browser, ISP, etc.). The issue is that no matter where in the country one accesses this particular page on the site, it will routinely fail, specifically in Chrome.
The client cannot launch the site in its present state, so we have been scrambling to find the cause.
While playing detective, certain facts became known to me.
The virtual server is owned by the client themselves and the physical boxes are located at their facility. Their lead IT professional, who has absolutely no real experience with Linux, has been maintaining the box and the OS. This last point is critical, because he has been updating anything and everything on the server the second it appears on the update list. They have indicated that, for them, this is normal procedure for their Windows servers.
This set off a few alarm bells.
The IT professional has been doing this for many weeks without us knowing, and the error started happening on the 5th of September. This coincided with two updates made by him, one of which was libgcrypt11 amd64 1.5.3-2ubuntu4.1. This has remained unchanged since September 5th.
Could this be causing the issue? Does anybody know of any problems afflicting specifically Chrome regarding the server sending no data?
An aside: I have attempted to use GDB to backtrace the problem, but I cannot get Apache to actually generate a dump file in the folder I created in /tmp. When I look at the logs, they do say that a dump file could be located there, so the directive I placed in apache2.conf is clearly being picked up. Permissions and ownership have been set on the folder.
I made the following changes to try and get it to work:
/etc/apache2.conf (file location)
CoreDumpDirectory /tmp/apache2-gdb-dump (code added)
/etc/sysctl.conf (file location)
kernel.core_uses_pid = 1 (code added)
kernel.core_pattern = /tmp (code added)
fs.suid_dumpable = 2 (code added)
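One sanity check worth doing before pointing at Apache specifically: the kernel only writes core files at all if the core-size ulimit of the crashing process is raised (ulimit -c unlimited), and an absolute path in kernel.core_pattern takes precedence over Apache's CoreDumpDirectory, which only changes the working directory the dump is written into. A throwaway crasher, built and run from a shell with the limit raised, tells you whether the current settings produce a core file anywhere, independent of Apache (the file name test_core.cpp is made up for this sketch):

    // test_core.cpp -- deliberately crashes so you can check whether a core
    // file appears under the current kernel.core_pattern / ulimit settings.
    //
    //   Build: g++ -g test_core.cpp -o test_core
    //   Run:   ulimit -c unlimited && ./test_core
    #include <cstdlib>

    int main() {
        std::abort();  // raises SIGABRT, whose default action dumps core if enabled
    }

If no core shows up for this program either, the problem is on the ulimit/core_pattern side rather than anything in apache2.conf.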
There are so many things that could be happening that I just don't know where to start with this. This isn't my area of expertise.
Specifics:
VC++ 7; the program works on XP but crashes on Win7; developed using VS2003.NET (old, I know, but it's what I have to work with, and it works fine, thank you very much)
I've got a program that runs great on XP (32-bit). However, I've recently tested it on Win7 and all kinds of chaos breaks loose. My strong suspicion is that the problem is in how my program deals with registry keys.
NOTE: The program does not create or destroy keys, only queries for keys and returns interesting values. ("Interesting" described below)
In the simplest form, the program reads data from a SCSI attached device, and saves the data to a file on the host PC. The program queries the registry for SCSI adapters and returns the adapter IDs which the program uses to access the device.
To me, it doesn't look like the registry structure has changed from XP to Win7, but I'm not 100% sure. Any insight on that would be great :)
Also, I read at http://www.techsupportalert.com/content/how-windows7-vista64-support-32bit-applications.htm that the way Win7 handles 32-bit applications is like a reflection. Does this change how I should query for the key? If so, any information on how to structure the query would be great.
I think what I need to know is:
Is it as simple as changing the hKey (or lpValueName) in the RegQueryValueEx method?
Or does this mean I need to change some other aspect of the RegQueryValueEx method?
Or something else entirely?
Thank you in advance!
It's worth running your application through the Application Verifier on your own machine first. Of particular interest is the LuaPriv section, which will highlight instances where your application is doing operations that don't play well on Vista or Win7. This should catch any case where you might be consulting registry locations that differ from XP.
One thing to be aware of is that if you are reading registry entries created by another application, it's possible that they might be in a different place, e.g. in the 32-bit or 64-bit view, or virtualised to the per-user location (this will typically happen if a process ran thinking it could write anywhere but didn't have admin privileges, so Windows sandboxes the registry writes into the virtualised area).
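If the values you need turn out to live in the other registry view, a 32-bit process can ask for a specific view explicitly by adding KEY_WOW64_64KEY (or KEY_WOW64_32KEY) to the access mask when opening the key; the RegQueryValueEx call itself does not change. A hedged C++ sketch of that pattern, using a made-up subkey and value name purely for illustration (substitute whatever your program actually reads):

    #include <windows.h>
    #include <iostream>

    #pragma comment(lib, "Advapi32.lib")

    int main() {
        // Hypothetical location -- replace with the key your application queries.
        const wchar_t* subKey = L"SOFTWARE\\ExampleVendor\\ExampleApp";

        HKEY hKey = NULL;
        // KEY_WOW64_64KEY forces the 64-bit view even from a 32-bit process;
        // use KEY_WOW64_32KEY (or neither flag) for the redirected 32-bit view.
        // The flags are ignored on 32-bit Windows, so the same code still runs on XP.
        LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE, subKey, 0,
                                KEY_READ | KEY_WOW64_64KEY, &hKey);
        if (rc != ERROR_SUCCESS) {
            std::wcerr << L"RegOpenKeyExW failed: " << rc << std::endl;
            return 1;
        }

        // The query itself is the same call as before; only the opened key differs.
        wchar_t value[256] = {0};
        DWORD size = sizeof(value);
        DWORD type = 0;
        rc = RegQueryValueExW(hKey, L"AdapterId", NULL, &type,
                              reinterpret_cast<LPBYTE>(value), &size);
        if (rc == ERROR_SUCCESS && type == REG_SZ) {
            std::wcout << value << std::endl;
        }

        RegCloseKey(hKey);
        return 0;
    }

Note that HKLM\SOFTWARE is redirected for 32-bit processes on 64-bit Windows, while some hives (HKLM\HARDWARE, for example) are shared, so whether you need the flag at all depends on where the adapter information actually lives.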
I've recently started to notice really annoying problems with VisualSVN (+ server) and/or TortoiseSVN. The problem is occurring on multiple (2) machines, both running Windows 7 x64.
The VisualSVN Server is running on Windows XP SP3.
What happens is that after, say, 1, 2, or 3 files (or a bit more, but almost always at the same file), the commit just hangs on transferring data, with a speed of 0 bytes/sec.
I can't find any error logs on the server. I also just requested a 45-day trial of Enterprise Server for its logging capabilities, but there are no errors there either.
Accessing the repository disk itself is fast; I can search/copy/paste on that disk (the SVN repo disk) just fine.
The VisualSVN Server also does not use excessive amounts of memory or CPU, which stays around 0-3%.
Both the server's and TortoiseSVN's memory footprints move/change, which would indicate that at least "something" is happening.
Committing with Eclipse (different project (PHP), different repository on the server) works great: no slowdowns, almost instant commits, whether with 1 file or 50 files. The Eclipse plugin that I use is Subclipse.
I am currently quite stuck on this problem and it is preventing us from working with SVN right now.
[edit 2011-09-08 1557]
I've noticed that it goes extremely slowly on 'large' files, for instance a 1700MB .resx (binary) or a 77KB .h source (text) file. 'Small' files under 10KB go almost instantly.
[edit 2011-09-08 1608]
I've just added the code to code.google.com to see if the problem is on my end or the server's end. Adding to Google Code goes just fine, no hangs at all: 2.17MB transferred in 2 minutes and 37 seconds.
I've found and fixed the problem. It appears to have been a faulty NIC: speedtest.net showed ~1 Mbit, and swapping in a different NIC pushed this to the maximum of 60 Mbit, solving my commit problems.
I am installing a .NET (CF) app using a CAB on a thin client running WinCE 6.0. When I first install it, everything is fine, and the app gets installed in the specified location.
Just out of curiosity, I clicked on the same CAB again and was greeted with a "Not enough space" message. None of the files were modified... so it doesn't make any sense at all.
Are there any settings in the CAB I should be using to avoid this?
I have been using CABs for 3 years now and haven't seen this type of message yet. The message would make sense if files had changed and got bigger, but if no change happened, something is off.
This might help:
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_vm_admin_guide.pdf
(Ctrl-F type in "full" or Disk)
Is the hard drive on the thin client almost completely full? It sounds to me like it has just enough space to install it, and then when you try to execute it again, it can't find enough free space on the hard drive.
I think the installer only checks the registry to detect a previous installation of the same program, and does not check whether the files from the previous installation are still present or not. If they were deleted, or the file system is not persistent, then the new installation does not have anything to overwrite.
On top of that, even if the files are present, the installation would also have to make sure that the file sizes are the same (they could be zero due to some file system corruption, for example). Still, I might be forgetting some other edge cases.
I suppose for performance and consistency reasons it is easier just to ask for more free space.