Why does this shell script add a carriage return to the filename of the output file?
#!/bin/bash
/usr/bin/tail -n 1 /path/logchanged.csv >> "/path/logcontatenated.csv"
The output file is not named "logcontatenated.csv" but "logcontatenated.csv
" (the closing quote ends up on the next line).
I really can't find on the internet why this happens.
Could it be that you created that script on Windows? If the line ends in \r\n (with no trailing spaces), the filename is interpreted as logcontatenated.csv\r. Try hd yourscript.sh to display a hexdump of your script. Line breaks should be a single 0a byte rather than the two-byte sequence 0d 0a, i.e. make sure the byte before any 0a is NOT 0d. You can use dos2unix yourscript.sh to fix your script; you might need to install dos2unix first.
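As a rough check-and-fix sequence (a sketch that assumes dos2unix is installed; grep -c counts the lines that still contain a carriage return):
grep -c $'\r' yourscript.sh          # non-zero means DOS (CRLF) line endings
dos2unix yourscript.sh               # rewrite the script with LF-only line endings
grep -c $'\r' yourscript.sh          # should now print 0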
I create a simple test file like this:
$ cat > test
blah
Now I open it in vi and run :%!xxd so I can edit the first bytes to ffd8 ffe0:
00000000: ffd8 ffe0 0a blah.
and then I run :%!xxd -r and save.
file does NOT report JPEG:
$ file test
test: Unicode text, UTF-8 text
And when I take a hexdump of the result:
$ xxd test
00000000: c3bf c398 c3bf c3a0 0a .........
What am I doing wrong with xxd?
Thank you
When opening the file with vi, please make sure to say:
LC_ALL=C vi test
and then edit and save the file with the procedure you showed. In a UTF-8 locale, vi re-encodes the high bytes produced by :%!xxd -r as UTF-8 characters when it writes the file, which is why ff became the two-byte sequence c3 bf; forcing the C locale keeps the bytes as they are.
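For comparison, here is a minimal sketch of the same experiment done entirely from the shell, so no editor gets a chance to re-encode the high bytes (printf, xxd and file are assumed to be installed):
printf '\xff\xd8\xff\xe0\n' > test    # write the raw bytes directly
xxd test                              # expect: 00000000: ffd8 ffe0 0a
file test                             # expect something like: JPEG image data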
I have been comparing hash values between multiple systems and was surprised to find that PowerShell's hash values are different from those of other terminals.
Linux terminals (Cygwin, Bash for Windows, etc.) and the Windows Command Prompt all show the same hash, whereas PowerShell shows a different hash value.
This was tested using SHA-256, but I found the same issue with other algorithms such as MD5.
Encoding Update:
I tried changing the PowerShell encoding, but it did not have any effect on the returned hash values.
[Console]::OutputEncoding.BodyName
iso-8859-1
[Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8
utf-8
GitHub PowerShell Issue
https://github.com/PowerShell/PowerShell/issues/5974
tl;dr:
When PowerShell pipes a string to an external program:
It encodes it using the character encoding stored in the $OutputEncoding preference variable
It invariably appends a trailing (platform-appropriate) newline.
Therefore, the key is to avoid PowerShell's pipeline in favor of the native shell's, so as to prevent implicit addition of a trailing newline:
If you're running your command on a Unix-like platform (using PowerShell Core):
sh -c "printf %s 'string' | openssl dgst -sha256 -hmac authcode"
printf %s is the portable alternative to echo -n. If the string contains ' chars., double them or use `"...`" quoting instead.
In case you need to do this on Windows via cmd.exe, things get even trickier, because cmd.exe doesn't directly support echoing without a trailing newline:
cmd /c "<NUL set /p =`"string`"| openssl dgst -sha256 -hmac authcode"
Note that there must be no space before | for this to work. For an explanation and the limitations of this solution, see this answer.
Encoding issues would only arise if the string contains non-ASCII characters and you're running in Windows PowerShell; in that event, first set $OutputEncoding to the encoding that the target utility expects, typically UTF-8: $OutputEncoding = [Text.Utf8Encoding]::new()
PowerShell, as of Windows PowerShell v5.1 / PowerShell (Core) v7.2, invariably appends a trailing newline when you send a string without one via the pipeline to an external utility, which is the reason for the difference you're observing (that trailing newline will be a LF only on Unix platforms, and a CRLF sequence on Windows).
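You can see the effect of that single trailing newline from any Unix shell (a quick sketch using openssl, as in the pipelines above):
printf %s 'string' | openssl dgst -sha256        # digest of the 6 bytes "string"
printf '%s\n' 'string' | openssl dgst -sha256    # digest of "string" plus LF: a different hash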
You can keep track of efforts to address this problem in GitHub issue #5974, opened by the OP.
Additionally, PowerShell's pipeline is invariably text-based when it comes to piping data to external programs; the internally UTF-16LE-based PowerShell (.NET) strings are transcoded based on the encoding stored in the automatic $OutputEncoding variable, which defaults to ASCII-only encoding in Windows PowerShell, and to UTF-8 encoding in PowerShell Core (both on Windows and on Unix-like platforms).
In PowerShell Core, a change is being discussed for piping raw byte streams between external programs.
The fact that echo -n in PowerShell does not produce a string without a trailing newline is therefore incidental to your problem; for the sake of completeness, here's an explanation:
echo is an alias for PowerShell's Write-Output cmdlet, which - in the context of piping to external programs - writes text to the standard input of the program in the next pipeline segment (similar to Bash / cmd.exe's echo).
-n is interpreted as an (unambiguous) abbreviation for Write-Output's -NoEnumerate switch.
-NoEnumerate only applies when writing multiple objects, so it has no effect here.
Therefore, in short: in PowerShell, echo -n "string" is the same as Write-Output -NoEnumerate "string", which - because only a single string is output - is the same as Write-Output "string", which, in turn, is the same as just using "string", relying on PowerShell's implicit output behavior.
Write-Output has no option to suppress a trailing newline, and even if it did, using a pipeline to pipe to an external program would add it back in.
Linux terminals and PowerShell use different encodings, so the actual bytes produced by echo -n "string" are different. I tried it on my Linux Mint terminal and in Windows 10 PowerShell. Here is what I got:
Linux Mint:
73 74 72 69 6E 67
Windows 10:
FF FE 73 00 74 00 72 00 69 00 6E 00 67 00 0D 00 0A 00
It seems that Linux terminals use UTF-8 while Windows PowerShell uses UTF-16 with a BOM. Also, in PowerShell you cannot use the -n parameter for echo, so echo places the newline characters \r\n (0D 00 0A 00) at the end of "string".
Edit: As mklement0 said below, Windows PowerShell uses ASCII by default when piping.
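For reference, the Linux-side byte dump can be reproduced with standard tools (a small sketch; od is part of coreutils):
echo -n "string" | od -An -t x1
#  73 74 72 69 6e 67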
I wrote a little Bash script and I'm having a problem while reading from the command line. I think it's because I wrote the script on Windows. Here is the code:
read NEW_MODX_PROJECT
and the output of the debug mode
+ read $'NEW_MODX_PROJECT\r'
Finally here the error I get
': Ist kein gültiger Bezeichner.DX_PROJECT
I think in English it should mean "': is not a valid identifier.DX_PROJECT"
It worked fine while I was writing it on Windows. I used Console2 to test it, which uses sh.exe.
Your assertion is correct -- Windows uses CRLF line separators, but Linux just uses LF.
The reason for the strange error message is that the carriage return is included as part of the variable's name; when the error message is printed, the terminal jumps back to the first column at the carriage return, so the end of the message overwrites its beginning.
There is a pair of utilities, dos2unix and unix2dos, which you can use to easily convert between the two formats, e.g.:
dos2unix myscript.sh
If you don't happen to have them, you can achieve the same using tr:
tr -d '\r' < myscript.sh > myscript-new.sh
Either will strip all the carriage returns and should un-confuse things.
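If you want to confirm the diagnosis before converting (both commands are standard on most Linux systems):
file myscript.sh              # reports "..., with CRLF line terminators" for DOS-formatted files
cat -A myscript.sh | head     # CRLF lines end in ^M$ instead of just $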
I need to make my batch file replace a setting in "config.ini" for example:
color = 1A
How can I find and replace ONLY the 1A part instead of adding another line?
Are you on a Unix-like OS? If so, just use the sed command:
sed 's/1A/replacedstring/' config.ini
For more information: http://en.wikipedia.org/wiki/Sed
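If you want to avoid touching other occurrences of 1A elsewhere in the file, here is a slightly more targeted sketch (it assumes GNU sed and the "color = 1A" layout from the question; NEWVALUE is a placeholder for whatever value you want to set):
sed -i 's/^\(color[[:space:]]*=[[:space:]]*\).*/\1NEWVALUE/' config.ini   # -i edits config.ini in place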
Esteemed Meld and Emacs/ESS users,
What I did:
Create a script.r using Emacs/ESS.
Make some modifications to script.r by pulling some lines of code from another_script.r
Reopen another_script.r (or script.r) in Emacs/ESS to continue working.
All the lines in another_script.r which were not pushed to script.r end with ^M
Sometimes it's the other way around - only the line that was pushed/pulled ends with ^Ms. So far I haven't isolated exactly which action determines where the ^Ms are placed. Either way, I still end up with ^Ms all over the place, and I'd like to avoid getting them after using Meld!
FWIW: the directory is being synced by Dropbox; in Meld, under Preferences > Encoding, "utf8" is entered in the text box; all actions are performed under Linux (Ubuntu 12.04) with Meld v1.5.3 and Emacs v23.3.1.
My current workaround is running dos2unix /path/to/script.r in a terminal, which strips the ^Ms. But this shouldn't be necessary, and I'm hoping someone here can tell me how to avoid them.
Cheers.
In a terminal I ran cat script.r | hexdump -C | head and amongst the output found a 0d 0a, which is DOS formatting for a new line (carriage return 0d immediately followed by line feed 0a). I ran the same command on another_script.r, the file I was merging with, but only observed 0a, no 0d 0a, indicating Unix formatting.
To check further whether this was the source of the ^M line endings, I converted script.r to Unix formatting via dos2unix script.r and verified with hexdump -C, as above, that 0d 0a had been converted to 0a. I then performed a merge with Meld, attempting to replicate the process that had previously yielded ^M line endings in my scripts. I re-opened both files in Emacs/ESS and found no ^M line endings. Short of converting script.r back to DOS formatting and repeating the above procedure to see if the ^M line endings reappear, I believe I've solved my ^M issue, which simply is that, unbeknownst to me, one of my files was DOS formatted. My take-home message: in a Windows-dominated environment, never assume that one's personal Linux environment doesn't contain DOS bits. Or line endings.
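For reference, the check-and-convert steps described above, as commands (a sketch assuming dos2unix and hexdump are installed):
hexdump -C script.r | head     # look for 0d 0a pairs (DOS) vs lone 0a (Unix)
dos2unix script.r              # convert to Unix line endings in place
hexdump -C script.r | head     # verify that only 0a remains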