MySQL dump works with the mysql.com repo but is broken with the Ubuntu repo - ubuntu-14.04

My colleague gave me an 80 MB mysql.sql.xz file, exported from MySQL 5.6.26.
I set up MySQL 5.6.28 on Ubuntu 14.04, but the import is only partial and fails with a syntax error. Since I'm not able to fully decompress the file, my guess is that only part of the SQL is read and the statements are not correctly terminated.
I tried MariaDB 10 and it has the same problem.
When I run unxz I only get part of the SQL file, and the last statement in it is incomplete.
I then installed MySQL 5.6.29 from mysql.com, and it imports the .xz dump successfully.
Can anyone point out what the problem might be?
I have reinstalled quite a few times now; the only working setup is the one using the mysql.com repo.
Thanks in advance

If the file is truncated then that is the cause of your problem, right? It really sounds like you have an issue in transferring the file rather than anything incorrect at either database instance.
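One quick way to confirm this is to test the archive and compare checksums on both the sending and receiving machines (a minimal sketch, assuming the file is named mysql.sql.xz on both ends):

# Test the integrity of the compressed archive; a truncated
# transfer typically makes this fail with "Unexpected end of input".
xz -t mysql.sql.xz

# Compare a checksum on the source machine and on the server;
# they should match if the transfer was complete.
sha256sum mysql.sql.xz

# Comparing the file size on both ends is a cheap first check too.
ls -l mysql.sql.xz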

Related

MongoDB recovery database from Compass

I was working on a new project. There was a lot of data that I needed to add locally, and I used Mongoose and Node.js for this. But the Windows 11 installation where the project lives no longer boots. Fortunately, I have Ubuntu on my computer and I can access the Windows files.
My question is: there is a data folder in the MongoDB directory under Program Files. Is there any chance I can recover my old data from it?
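In principle, yes: MongoDB stores the databases as files in that data folder, so a local mongod can usually be pointed at a copy of it. A minimal sketch, assuming the Windows partition is mounted under /mnt/windows, the data folder is copied to ~/recovered-data, and the local mongod version and storage engine match the original (all of these paths are examples, not from the original question):

# Copy the data folder off the mounted Windows partition first,
# so the originals are never touched.
cp -r "/mnt/windows/Program Files/MongoDB/Server/5.0/data" ~/recovered-data

# Start a local mongod against the copied files on a spare port;
# if it starts cleanly, the old databases should be visible.
mongod --dbpath ~/recovered-data --port 27018

# In another terminal, take a portable dump that can be restored anywhere.
mongodump --port 27018 --out ~/recovered-dump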

Azure does not update files on a web server (Pyramid wsgi-app)

I have a Pyramid application project. I keep it in git and pull the branch to the server whenever I need to update. Until now I was working on Koding, but lately I decided to check out Azure and its developer benefits.
After creating an Ubuntu server virtual machine (which is actually what runs under Koding), I downloaded my project using git pull, but forgot to change the branch to the one I'm currently working on. So I did, but the server still shows me the old page (as if I hadn't checked out the other branch). I checked over SFTP and the files show they have been updated.
Why am I still seeing the old page?
Now I know the reason why! (At least I think so, but please correct me if I'm wrong.)
I noticed that there was a .pyc file for every .py file; as I understand it, those are "compiled" (a bit of a simplification?) Python files. It seemed to me that they were not recompiled on app launch, but rather when setup.py ran; the edit dates suggest that.
So the reason I didn't see the changes I made in the code was that the HTTP server was using the old "compiled" files instead of the source files! Is that normal/expected behaviour? I don't know. There are many other questions now, but the main question was answered, so I'll mark this as the answer until someone gives a better one.
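For anyone hitting the same thing, a minimal sketch of forcing the new sources to be picked up (the project path, development.ini and the pserve command are assumptions; use whatever actually serves your app):

# Remove stale bytecode so Python regenerates it from the
# current sources on the next import.
find ~/myproject -name "*.pyc" -delete
find ~/myproject -name "__pycache__" -type d -prune -exec rm -rf {} +

# Reinstall in development mode and restart the app so the new
# code is actually loaded.
cd ~/myproject && python setup.py develop
pserve development.ini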

Restoring a MySQL database on a potentially different MySQL installation?

I have a broken installation of Ubuntu 14.04 - it won't boot, but I won't say any more about that because it's not really what I'm asking about. I have a MySQL database (created using v5.5) on the broken Ubuntu installation and I need that data. I can get at the raw MySQL database files by mounting the broken installation on another machine.
I actually need the database to be imported into a MySQL v5.1 installation. I tried copying the raw database files (e.g. the directory at /var/lib/mysql/dbname) into the same directory on the working OS installation. At first it seemed to work: I can see the database, I can use it and I can list the tables. But it turns out that even though I can see the tables in the db, any attempt to describe or use them in any way gives a 'table doesn't exist' error.
Ideally, I'd love to be able to use mysqldump and then import the database the proper way, but how can I get a dump of the database if it's not part of a running MySQL installation (remember, I can't boot into the broken installation)?
Of course, mysqldump is the most preferable solution, but if it's not possible to use that utility with the raw database files as input, then I'm willing to try anything that might work.
Of course the first thing you should do is install the same version of MySQL as the original - if you're directly using the raw data files, keeping things as identical to the original as possible is a must! The same applies to paths: make sure the new installation and the data files are placed in the same directory paths they were in originally.
Once you have this, you can mysqldump the tables and use that to import into a clean, new installation.
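A rough sketch of that workflow (the mount point /mnt/broken and the use of a throwaway 14.04 box are assumptions; adjust paths to your setup):

# Install the same MySQL version the broken system was running (5.5 here).
sudo apt-get install mysql-server-5.5

# Stop the server and replace its data directory with the raw files from
# the broken installation - including ibdata1 and the ib_logfile* files,
# not just the per-database folder, or InnoDB tables will appear to exist
# but fail with "table doesn't exist".
sudo service mysql stop
sudo rsync -a /mnt/broken/var/lib/mysql/ /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql
sudo service mysql start

# Take a logical dump, which can then be imported into the 5.1 server.
mysqldump -u root -p --all-databases > alldb.sql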

Cyberduck uploads break node-dev/up/hotnode

I've been using Cyberduck 4.2.1 to connect to my EC2 instance to edit my Node projects. I've used Node-dev to reload my project/server as files are updated, but if I save the files through Cyberduck's Edit command, the server never really reloads and usually crashes.
I've tested with a few different editors (TextMate, Dashcode) with the same result. Node-dev restarts correctly when I edit files from the terminal. I have tried a few other tools that do roughly the same thing, hotnode and up. They all work when editing via the terminal, but fail when I edit files through Cyberduck. I think it has something to do with the way Cyberduck replaces the remote files when a file is saved.
Does anyone know what might be causing this, and maybe suggest some changes to these github projects? If not, are there better Mac FTP clients that might not have this issue?
I don't know about Node-dev, but my educated guess is that it crashes because it reads a partially uploaded file. I suggest trying the Upload with temporary filename feature, available as a hidden option in Cyberduck.
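On macOS that hidden option can be set from the command line; a sketch, assuming the preference key is queue.upload.file.temporary as listed among Cyberduck's hidden configuration options (this key may differ between versions, so check the Cyberduck documentation for your release):

# Make Cyberduck upload to a temporary filename and rename it into place
# once the transfer completes, so watchers like node-dev never see a
# half-written file. (Key name is an assumption; verify for your version.)
defaults write ch.sudo.cyberduck queue.upload.file.temporary true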
You can try Cyberduck 6.6.2... it works for me.

couchdb log files missing, can't get any to be created :-(

I'm running CouchDB 1.0.1 on Ubuntu and everything is working OK - except that I've just seen that my log files are non-existent. They seem to have been like this for nearly a year, but to be fair I haven't really been using the system, as it is a test bed for a project I've just picked up again.
/var/log/couchdb contained 2 files: an old (many months!) couch.log.1 and a couch.log with size 0 - which is suspicious. I've deleted the old files and now tried restarting couch, but the log files stubbornly stay absent!
I've restarted couch using
/etc/init.d/couchdb restart
But no joy.
My local.ini file has this entry:
[log]
level = debug
file = /var/log/couchdb/couch.log
And /var/log/couchdb is owned by couchdb and is in group couchdb so I don't think it is a permission issue. There is plenty of disk space on the server too.
I've rebooted the server as well in frustration - no difference.
How do I persuade couchdb to start logging anything again? The reason it has become an issue is that I'm trying to PUT some standalone attachments, but only the small ones are working so I'm trying to look in my (non-existent) log files to see what the problem might be.
Any ideas?
There is a possibility that the log file configuration is being set by some other .ini file.
Issue a GET request to http://localhost:5984/_config/log to see what CouchDB has set.
I had stuff like this happen to me because I had installed CouchDB multiple times using different methods (compiling from source, using apt, the install script that was put out by CouchOne at one point, etc.). It was hard to figure out exactly which local.ini was the real one!
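A couple of quick checks along those lines (assuming CouchDB is listening on the default port 5984):

# Show the effective logging configuration CouchDB is actually using.
curl http://localhost:5984/_config/log

# Dump the whole runtime configuration to see which values won.
curl http://localhost:5984/_config

# Print the chain of .ini files this couchdb binary reads, in order;
# the last file in the chain overrides the earlier ones.
# (If -c isn't supported by your couchdb script, see couchdb -h.)
couchdb -c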
OK so it looks as if my LIVE ini files were actually at /usr/local/etc/couchdb/local.ini and not /etc/couchdb/local.ini
And the real logs were in /usr/local as well.
Not quite sure why I had both sets, I guess I had installed couchdb a couple of times in the past and I was looking in the legacy files by mistake!
Hope this helps someone else ... I have been scratching my head for a couple of hours over it now!
