PostgreSQL restore error at or near "`" - Linux

I have an 800 MB PostgreSQL database backup, and I even had a hard time opening the file because there isn't enough memory.
I tried to restore the file, but I receive this error while restoring. Does anyone know how to fix it?
I run this command:
psql -U root -d mydatabase -f dbfile.sql
I receive this message:
ERROR: syntax error at or near "`"
LINE 1: INSERT INTO cv_balance` VALUES (4279704,3431,'2008-08-10 2...
Please help.

It looks like, for some reason, a ` mark either got added after cv_balance or removed before cv_balance. Look at the first line of your SQL file; it currently probably reads something like this:
INSERT INTO cv_balance` VALUES ...(continued)...
modify it to read like this:
INSERT INTO cv_balance VALUES ...(continued)...
(i.e. remove the errant backquote)
If you need an editor that can handle large files, try something like vim.
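Backquoted identifiers like this usually come from a MySQL-format dump, so there may be many more of them further down. Rather than opening the 800 MB file in an editor at all, you can strip them with a stream editor. A minimal sketch, assuming the backquotes appear only as identifier quoting and never inside your actual string data:
# keep a .bak copy, then delete every backquote in place
sed -i.bak 's/`//g' dbfile.sql
psql -U root -d mydatabase -f dbfile.sql
If backquotes can legitimately occur inside data values, edit only the offending lines instead.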

Related

How do I use Nagios to monitor a log file that generates a random ID

This is the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is the random ID that gets generated.
Here is the part of my script where Nagios checks for the logfile path:
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles
--tag='failorder' --logfile=/test/James-$(date +"%F")_-${HOSTNAMEIP}-appserver0.log
....
I am getting the following output in nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
15215125111115 is always generated randomly, but I don't know how to get Nagios to identify it. Is there a way to add a variable for this or something? I tried adding an asterisk "*" but that didn't work.
Any ideas would be much appreciated.
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "james-${date +"%F"}_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a "-v" you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and rotated archives as far as the filename is concerned. (You frequently find something like xyz..log.)
What check_logfiles does is look into the directory where the logfiles are supposed to be; from /test/dummy it only uses the directory part. Then it takes all the files inside /test and compares the filenames with the --rotation argument. The files which match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest is considered to be the current logfile. Inside this file, check_logfiles searches for the criticalpattern.
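Putting the pieces together, a complete invocation could look like the sketch below. It reuses the plugin path and variables from the question; the criticalpattern value is a placeholder for whatever string you actually want to alert on:
HOSTNAMEIP=$(/bin/hostname -i)
/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
  --tag failorder \
  --type rotating::uniform \
  --logfile /test/dummy \
  --rotation "James-$(date +"%F")_\d+-${HOSTNAMEIP}-appserver0.log" \
  --criticalpattern 'fail order'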
Gerhard

cassandra copy [Errno 13] Permission denied

Cassandra newbie here. I have just set up a proof-of-concept single-node machine on Red Hat Linux. I finally got all of the permissions correct and started up the machine. I then created a keyspace called harvard, issued the use command to switch into harvard, and then created a table called hmxpc.
I then wanted to import a .csv file. I placed the .csv file in the cassandra folder just for simplicity, ran chmod 755 on the file, and issued the following:
copy hmxpc (course_id, userid_di, certified, explored, final_cc_cname_di, gender, grade, incomplete_flag, last_event_di, loe_di, nchapters, ndays_act, nevents, nforum_posts, nplay_video, registered, roles, start_time_di, viewed, yob) from 'cassandra/HMXPC.csv' with header=true;
When I run it, I get the following error:
[Errno 13] Permission denied: 'import_harvard_hmxpc.err'
What am I doing wrong?
I just had the same issue. I figured it out by using the --debug flag.
My floats had ',' instead of '.', so my CSV couldn't be parsed. cqlsh tried to write an .err file describing the issue, but I was in /root, which Cassandra can't write to. So I cd'ed to /tmp and ran the same command; this time I got errors showing that my floats couldn't be parsed.
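For reference, a minimal sketch of that workflow, using the keyspace and table from the question (the CSV path is an example):
cd /tmp   # somewhere the cqlsh user can write its import_*.err file
cqlsh --debug -e "COPY harvard.hmxpc FROM '/tmp/HMXPC.csv' WITH HEADER = true;"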
The problem ended up being a Red Hat permissions issue and had nothing to do with Cassandra. Thanks for looking.
I was getting the same error as shown in screenshot_Errored. I moved the .csv file to the .cassandra directory and was able to execute the same cqlsh command, as shown in screenshot_worked.
Beyond the cases described in the other answers, the error may also appear if the COPY command lists the columns in an incorrect order or with an incorrect count, as described below.
For example, consider a CSV file with the following header line:
actor, added_date, video_id, character_name, description, encoding, tags, title, user_id
If I use the following COPY command:
cqlsh:killrvideo> COPY videos_by_actor(actor, added_date, character_name, description, encoding, tags, title, user_id, video_id) FROM 'videos_by_actor.csv' WITH HEADER = true;
I will get Errno 13:
Using 7 child processes
Starting copy of killrvideo.videos_by_actor with columns [actor, added_date, character_name, description, encoding, tags, title, user_id, video_id].
[Errno 13] Permission denied: 'import_killrvideo_videos_by_actor.err'
If I set the names of the columns correctly in the COPY command, as follows:
cqlsh:killrvideo> COPY videos_by_actor(actor, added_date, video_id, character_name, description, encoding, tags, title, user_id) FROM 'videos_by_actor.csv' WITH HEADER = true;
then the command completes successfully.
Using 7 child processes
Starting copy of killrvideo.videos_by_actor with columns [actor, added_date, video_id, character_name, description, encoding, tags, title, user_id].
Processed: 81659 rows; Rate: 5149 rows/s; Avg. rate: 2520 rows/s
81659 rows imported from 1 files in 32.399 seconds (0 skipped).
Here's my checklist for this rather non-specific (catch-all) cqlsh error "[Errno 13] Permission denied" for the containerized use cases (e.g. when using bitnami/cassandra:latest):
Make sure the path you are supplying to the COPY command is the internal (container) path, not an external one (host, PVC etc).
Make sure the CSV file has correct read permissions for the internal container user ID, not the external one (host, PVC etc), especially if the CSV was created in another containerized app (e.g. Jupyter Notebook).
If the file contains a header, make sure your COPY command ends in WITH HEADER = true; (yes, omitting it can also raise a permission denied error...)
For example, assuming you have run your Cassandra container like this:
$ docker run -d --rm --name cassandra -v /tmp:/bitnami -u 1001 bitnami/cassandra:4.0
Then the COPY command issued in cqlsh to import a /tmp/random_data1.csv from the host should be:
> COPY dicts.dict1 (key, value) FROM '/bitnami/random_data1.csv' WITH HEADER = true;
and the /tmp/random_data1.csv file should be owned by user 1001 or accessible for reading for all users.
The most bizarre reason for this error is lack of write access... to the errors file (the path to which is left empty in the default config file). This is particularly likely if running Cassandra (container) as a non-root user.
To solve it, one needs to pass a custom config file (e.g. /bitnami/cqlshrc) when executing the client:
$ cqlsh -u <user> -p <pass> --cqlshrc=/bitnami/cqlshrc
There should be a sample config cqlshrc.sample shipped with your installation (use cd / && find | grep cqlshrc to find it).
Changes to be made in your custom config file:
# uncomment this section:
; [copy-from] -> [copy-from]
# uncomment and specify custom error file
; errfile = -> errfile = /bitnami/cassandra-errfile.txt
More info on the setting in question can be found under errfile in the cqlshrc documentation.
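Putting it together, a minimal sketch of creating and using such a config file (the paths are examples matching the container setup above):
# write a minimal cqlshrc with the [copy-from] section enabled
cat > /bitnami/cqlshrc <<'EOF'
[copy-from]
errfile = /bitnami/cassandra-errfile.txt
EOF
cqlsh -u <user> -p <pass> --cqlshrc=/bitnami/cqlshrc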

Config file finding unexpected $end, not sure why

I am using a custom config for collectd, and for some reason I keep getting a failure when I try to run the service. Originally I had everything in one big file, but to make changing configs easier I want to separate out the settings for the various plugins and components. collectd has an Include option to do exactly this. It seems to work, but when collectd tries to read the included file I get the following error:
Parse error in file `/etc/collectd/collectd.conf.d/http.conf', line 1100 near `': syntax error, unexpected $end, expecting EOL
If I go in and copy-paste directly into the server using vim, it works. However, when the package installs it, it won't. I know these kinds of errors can come from mismatched brackets or quotes, but this is not a problem in these files. Is there anything else which could cause such an error?
For users who see a similar error: in my case, there was no newline at the end of collectd.conf or the plugin configuration files. To verify, you can use this command:
$ xxd collectd.conf | tail -n1 | grep 0a
00001c0: 730a 0a s..
You must see a 0a (the newline byte) at the end of the file.
Note: you can replace collectd.conf with the plugin configuration files.
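If the newline turns out to be missing, you can append one without opening the file. A sketch using GNU sed (the '$a\' expression forces a trailing newline if there is none); run it against each included file as needed:
sed -i -e '$a\' /etc/collectd/collectd.conf.d/http.conf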

egrep command with piped variable in ssh throwing No Such File or Directory error

OK, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens, and I'm trying to pass multiple tokens to an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like the value below, but its length is dynamic, meaning it can contain many file names separated by pipes:
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
One thing worth noting is that my log file data contains spaces. In the code I've given, the actual comments which I want to extract start after the jar file name, e.g.: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comments are 'New Life Starts here', but the logs show that we only get as far as 'New'; it breaks at the space. I tried setting IFS, but to no avail. Probably I need to set it on the remote side, but I don't know how to do that.
Any help?
Your command runs egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine and passes the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
You should use fgrep to avoid regex-special interpretation of your input (again with the command running remotely, not inside a local $()):
commentsArray=($(ssh $sourceUser@$sourceHost "fgrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
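As a side note on the space-splitting problem from the question: filling the array via IFS and $(...) is fragile. Reading the ssh output line by line keeps log lines with spaces intact. A sketch, assuming bash 4+ for mapfile:
# each output line becomes one array element, internal spaces preserved
mapfile -t commentsArray < <(ssh "$sourceUser@$sourceHost" "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'")
echo "${commentsArray[0]}"
echo "${#commentsArray[@]}"   # element count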

remy inliner command line tool returns path.existsSync is now called `fs.existsSync`

I'm trying to use inliner command line tool locally to combine some files. But I get the following error message in the console.
path.existsSync is now called `fs.existsSync`
So I went into /usr/local/lib/node_modules/inliner/bin/inliner and changed line 65 from:
if (path.existsSync(url))
to
if (fs.existsSync(url))
but I still get the same error message. Can anybody give me a hint about what is wrong and how I can fix this?
There is already a question here, but that didn't fix my problem. Or am I editing the wrong file?
Cheers
:fab
I got inliner working by using the -i flag:
# -i, --images don't encode images - keeps file size small, but more requests
inliner -i http://fabiantheblind.info/coding.html > test2.html
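If you still want to hunt down the deprecation warning itself, the path.existsSync call may live in one of inliner's dependencies rather than the file you edited. A quick way to list every caller, assuming the global install path from the question:
grep -rn 'path.existsSync' /usr/local/lib/node_modules/inliner/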
