I am using the Dragonfly gem in a Ruby on Rails app to generate converted image URLs on the fly, but when I try to access the image's URL I get the following error:
HTTP Error 400. The request URL is invalid.
This seems to be coming from IIS and only occurs when the URL is longer than 256 bytes. How would one go about increasing the maximum URL size for a Helicon Zoo project within IIS? I have already tried the solutions described here with no success.
Edit:
This is the link (with the domain redacted):
http://{domain}.com/media/W1siZiIsIjIwMTQvMDgvMTkvNmVqb3JuMmd4aF9BSVRfT0ZGSUNFX1RSQVNQQVJFTlRfRURHRS5wbmciXSxbInAiLCJjb252ZXJ0IiwiLWZ1enogMjUlIC1maWxsIFwiI2NjY2NjY1wiIC1vcGFxdWUgd2hpdGUiXSxbInAiLCJyb3RhdGUiLCI1MC41MDY1MDU2NjI3NzkzMiJdLFsicCIsImNvbnZlcnQiLCItZnV6eiAxJSAtdHJhbnNwYXJlbnQgd2hpdGUiXV0?sha=3062766b
Which was generated by this code:
area_url = Area.find(params[:id]).image.convert('-fuzz 25% -fill "#cccccc" -opaque white')
area_url = area_url.rotate(params[:theta]).convert('-fuzz 1% -transparent white').url
I ended up solving this problem by returning the image directly from this controller rather than returning the Dragonfly URL.
area_overlay = Area.find(params[:id]).image.convert('-fuzz 25% -fill "#cccccc" -opaque white')
area_overlay = area_overlay.rotate(params[:theta]).convert('-fuzz 1% -transparent white').file
send_file area_overlay, :type => 'image/png', :disposition => 'inline'
We are running into an issue that seems unique to AVPlayer. We've built a new architecture for our HLS streams which makes use of relative URLs. We can play these channels on a number of other players just fine, but when playing through AVPlayer, the channel gets 400 errors when requesting child manifests or segments with relative URLs.
Using curl, we are able to get a 200 success by requesting a URL like: something.com/segmentfolder/../../abcd1234/videosegment_01.mp4
curl is smart enough to get rid of the ../../ and create a valid path, so the actual request (which can be seen using curl -v) looks something like: something.com/abcd1234/videosegment_01.mp4
Great. But AVPlayer doesn't do this. So it makes the request as is, which leads to a 400 error and it can't download the segment.
We were able to simulate this problem with Swift playground fairly easily with this code:
let hlsurl = URL(string: "website.com/segmentfolder/../../abc1234/videosegment.mp4")!
print(hlsurl)
print(hlsurl.standardized)
The first print shows the URL as is; trying to GET that URL leads to a 400. The second print handles it properly by adding .standardized to the URL, which leads to a 200. The problem is that this only works for the top-level/initial manifest; it doesn't work for the child manifests and segments.
let url = URL(string: "website.com/segmentfolder/../../abc1234/videosegment.mp4")!
let task = URLSession.shared.dataTask(with: url.standardized) { (data, response, error) in
    guard let data = data else { return }
    print(String(data: data, encoding: .utf8)!)
}
task.resume()
So the question: does AVPlayer support relative URLs? If so, why can't it handle the ../../ in the URL path like other players and curl can? Is there a special way to get it to standardize ALL URL requests?
Any help would be appreciated.
I am trying to add some images with AJAX via DirectUpload / ActiveStorage / Rails 6.
I followed the prerequisites from the ActiveStorage guide for integrating DirectUpload with jQuery:
https://edgeguides.rubyonrails.org/active_storage_overview.html#integrating-with-libraries-or-frameworks
const upload = new DirectUpload(file, url)

upload.create((error, blob) => {
  if (error) {
    // Handle the error
  } else {
    // Add an appropriately-named hidden input to the form with a
    // [..]
    console.log(blob.key);
  }
})
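For the full picture, the snippet above is wired to the file input roughly like the guide's example; the selector and hidden-input handling below are an illustrative sketch, not my exact code:

import { DirectUpload } from "@rails/activestorage"

// Illustrative sketch only: the selector and form handling are assumptions.
// The input is rendered with direct_upload: true, so it carries a
// data-direct-upload-url attribute.
const input = document.querySelector("input[type=file][data-direct-upload-url]")

input.addEventListener("change", () => {
  Array.from(input.files).forEach((file) => {
    const url = input.dataset.directUploadUrl
    const upload = new DirectUpload(file, url)

    upload.create((error, blob) => {
      if (error) {
        console.error(error)
      } else {
        // Submit blob.signed_id with the form via a hidden input
        const hiddenField = document.createElement("input")
        hiddenField.setAttribute("type", "hidden")
        hiddenField.setAttribute("value", blob.signed_id)
        hiddenField.name = input.name
        input.form.appendChild(hiddenField)
      }
    })
  })

  // Clear the selection so the browser does not upload the files again on submit
  input.value = null
})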
On my machine, it works for all files. But when I publish my app to my hosting provider, I get an error for some files, always the same ones, after the DirectUpload request:
Completed 422 Unprocessable Entity in 2ms (ActiveRecord: 0.0ms | Allocations: 689)
I looked at the XHR requests in my browser's dev tools, but the payload seems the same for a file which works and for one which fails:
{id: 219, key: "v2v1aqlk8gyygcc4smjeh0bbuc59", filename: "groupama logo.jpeg",…}
id: 219
key: "v2v1aqlk8gyygcc4smjeh0bbuc59"
filename: "logo.jpeg"
content_type: "image/jpeg"
metadata: {}
byte_size: 17805
checksum: "3GIVi2kNKClfH+d9HGYOfkA=="
created_at: "2020-04-09T08:25:40.000+02:00"
signed_id: "eyJfcmFpbHMiOnsibWVzc2zaFnZSI6IkJBaHBBZHM9IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--7c0750cb8c86a955a04fa9a11dc5389cdeb5e7b0"
attachable_sgid: "BAh7CEkiCGdpZAY6BkVUSSIxsaZ2lkOi8vYXBwL0FjdGl2ZVN0b3JhZ2U6OkJsb2IvMjE5P2V4cGlyZXNfaW4GOwBUSSIMcHVycG9zZQY7AFRJIg9hdHRhY2hhYmxlBjsAVEkiD2V4cGlyZXNfYXQGOwBUMA==--64a945c38dc5d85c05156da50b9c38819b106e10"
direct_upload: {,…}
url: "http://localhost:8491/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDVG9JYTJWNVNTSWhkakoyTVdGeGJHczRaM2w1WjJOak5ITnRhbVZvTUdKaWRXTTFPUVk2QmtWVU9oRmpiMjaUwWlc1MFgzUjVjR1ZKSWc5cGJXRm5aUzlxY0dWbkJqc0dWRG9UWTI5dWRHVnVkRjlzWlc1bmRHaHBBbzFGT2cxamFHVmphM04xYlVraUhUTkhTVlpwTW10T1MwTnNaa2dyT1VoSFdVOW1hMEU5UFFZN0JsUT0iLCJleHAiOiIyMDIwLTA0LTA5VDA2OjMwOjQwLjg5NFoiLCJwdXIiOiJibG9iX3Rva2VuIn19--a2acedc0924f735c5cc08db8c4b76f76accc3c8d"
headers: {Content-Type: "image/jpeg"}
I tried this solution, but the monkey patch doesn't work for me, and another solution doesn't seem to work either:
Rails API ActiveStorage DirectUpload produce 422 Error InvalidAuthenticityToken
I noticed that when I upload the logo image file through the file input without using DirectUpload, the file is sent to my server correctly.
= f.file_field :logos, direct_upload: true
Do you have any ideas I could test?
My issue came from the IO used to copy the file. In ActiveStorage::DiskController#update, Rails uses request.body and IO.copy_stream to create the file, and then verifies the created file against its checksum.
That check fails and throws the 422 HTTP error.
I noticed that in dev mode the IO stream was a StringIO, whereas on my hosting provider it was a uwsgi IO, because my hosting provider serves the Ruby on Rails application with uwsgi.
The uwsgi IO has no length or size method, and when ActiveStorage created the file from this IO, the file size was wrong: weirdly too large.
I noticed that if RAW_POST_DATA is assigned, then request.body returns a StringIO. And in the request.raw_post method, the body is read directly using request.content_length:
raw_post_body.read(content_length)
https://api.rubyonrails.org/classes/ActionDispatch/Request.html#method-i-body
I created a new controller which inherits from ActiveStorage::DiskController, in order to assign RAW_POST_DATA before the DiskController#update action:
class UploadController < ActiveStorage::DiskController
  def update
    request.env['RAW_POST_DATA'] = request.body.read(request.content_length)
    super
  end
end
And then I overrode the ActiveStorage disk#update route with mine:
put '/rails/active_storage/disk/:encoded_token', to: 'upload#update'
And it works!
ActionText uses ActiveStorage to store images, and I had the same issue there with images that were too large.
My patch makes ActionText work on my hosting provider too.
Kudos for finding the culprit here. I was having the exact same issue on a uwsgi setup and was trying to figure out why the content length was different when I found your post. Thanks for sharing!
I just took a slightly different approach with the fix, as I didn't want to hack the ActiveStorage routes, so I created a config/initializers file with the following code:
Rails.configuration.to_prepare do
  ActiveStorage::DiskController.class_eval do
    before_action :set_raw_post_data, only: :update

    def set_raw_post_data
      request.env['RAW_POST_DATA'] = request.body.read(request.content_length)
    end
  end
end
Maybe it would be worth creating an issue at https://github.com/unbit/uwsgi to let them know about this?
I am trying to load images from a local folder using fabric.js in node.
There seems to be very little up-to-date documentation on how to do this.
Most examples use fabric.Image.fromURL(imageurl).
As far as I'm aware, this only works for web urls, not local paths.
Correct me if I'm wrong, but I have tried
fabric.Image.fromURL(imgpath, (img) => {
  ...
})
which throws the error Could not load img: /image/path/img.jpg
Whereas
fs.readFile(imagepath, (err, i) => {
...
})
will successfully read the file, and i will be a Buffer.
What is the correct way to load a local image?
I know there is fabric.Image.fromObject, but I have no idea what type of object it wants.
I am currently loading the image into a 2D canvas object, converting it with canvas.toDataURL(), and putting that URL into fabric.Image.fromURL(). This works, but converting the image to a data URL is very slow for large images. There must be a way to load the image directly and avoid this problem.
If you are using fabric.js 3+, which uses the new jsdom, you can use file URLs!
fabric.Image.fromURL(`file://${__dirname}${filepath}`);
Check here in the fabric.js codebase to see how they handle reading files in the browser and in node for the visual test images:
https://github.com/fabricjs/fabric.js/blob/master/test/lib/visualTestLoop.js#L139
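For example, here is a rough sketch of loading a local image this way in node and putting it on a canvas; the file path and canvas size are placeholders, not values from your project:

// Rough sketch, assuming fabric.js 3+ under node (jsdom); the path and sizes are placeholders
const fabric = require('fabric').fabric;
const path = require('path');

const filepath = path.join(__dirname, 'images', 'photo.jpg'); // hypothetical local image

const canvas = new fabric.StaticCanvas(null, { width: 800, height: 600 });

fabric.Image.fromURL(`file://${filepath}`, (img) => {
  canvas.add(img);
  canvas.renderAll();
  // e.g. write the result out with canvas.createPNGStream()
});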
Try this one:
fabric.Image.fromURL(require("../../assets/mockup/100.png"), (img) => {...})
I have been looking at the mobify.js website for a while now, but I fail to understand the benefits of using it. (I am stumped as to why one would replace all the images on a page with a Grumpy Cat image.)
Could you kindly point me to a clear and lucid example in which the image size changes depending on the browser resolution?
I have done the following tasks till now:
0. Included the mobify.js header information
1. Used the mountains.jpg and forest.jpg images in my hosted website (the page contains only these two images)
2. Requested the page from a desktop machine, from a tablet (Samsung Galaxy 10 inch), and from an Android mobile phone
3. In all three cases, I see the same image getting downloaded; the size of the image stays the same in every case
I understand that the magic of size reduction can't happen on the fly, but how do I achieve this?
I realize that the Grumpy Cat example is a bit cheeky, but the same concept applies to solve your problem. Instead of replacing the images with Grumpy Cat images, you could write some logic to replace them with lower-resolution images (e.g. mountains-320.jpg and forest-320.jpg).
With Mobify.js, you need to write the adaptations in the JavaScript snippet that you added to your site. So, to load smaller images for mobile, you could define the path to the lower resolution image in your original HTML like this:
<img src="mountain.jpg" data-mobile-src="mountain-320.jpg" />
<img src="forest.jpg" data-mobile-src="forest-320.jpg" />
Then, in the JavaScript snippet, you could modify it to grab the image in the data-mobile-src attribute instead like this:
if (capturing) {
  // Grab reference to a newly created document
  Mobify.Capture.init(function(capture){
    // Grab reference to the captured document in progress
    var capturedDoc = capture.capturedDoc;

    // Select the images that declare a mobile source
    var imgs = capturedDoc.querySelectorAll("img[data-mobile-src]");
    for (var i = 0; i < imgs.length; i++) {
      var img = imgs[i];
      var ogImage = img.getAttribute("x-src");
      var mobileImage = img.getAttribute("data-mobile-src");
      img.setAttribute("x-src", mobileImage);
      img.setAttribute("old-src", ogImage);
    }

    // Render source DOM to document
    capture.renderCapturedDoc();
  });
}
Then, you'll see that the mobile site will download and render mountain-320.jpg or forest-320.jpg, but it will not download mountain.jpg or forest.jpg.
Just out of curiosity, what site are you wanting to use Mobify.js on?
Long-time reader, first-time poster.
I am using node v0.6.6 on OS X 10.7. I have not yet tried this in any other environment. I am using this client: https://github.com/elbart/node-memcache
When I use the following code, data randomly contains a few more bytes (as reported by console.log()), which leads to this image: http://imgur.com/NuaK4 (many other JPGs do this too). The favicon seems OK and HTML/CSS/JavaScript all work.
In other words: if I request the image, ~70% of the time the image is returned correctly; the other 30% of the time, data reports a few more bytes and the image appears corrupt in the browser.
client.get(key, function(err, data) {
  if (err) throw err;
  if (data) {
    res.writeHead(200, {'Content-Type': type, 'Content-Length': data.length});
    console.log('Sending with length: ' + data.length);
    res.end(data, 'binary');
  }
});
I have been messing with this for several hours and I can honestly say I am stumped. I am hoping someone can show me the error of my ways. I tried searching for a way to properly store binary data with memcache, but found no relevant information.
Extra information: it happens with various JPG images; all images are around 100-300 KB or less in file size. For example, one image has reported the following sizes: 286442, 286443, 286441. This problem DOES NOT occur if I read the data straight from disk and serve it with node.
Thanks in advance.
Edit: I updated my node version and the issue persists. The actual test source photo and the corrupt photo can be found in my comment below (Stack Overflow doesn't permit more links).
Elbart's node-memcache does not handle binary values correctly for the reasons Steve Campbell suggests: node-memcache does not give the client direct access to the buffer. By stringifying the buffers, binary data is corrupted.
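To see why, here is a small standalone sketch (modern Node API, placeholder filename) of what happens when a JPEG buffer gets round-tripped through a UTF-8 string:

// Sketch: round-tripping raw bytes through a utf8 string corrupts them.
// 'photo.jpg' is a placeholder path.
const fs = require('fs');

const original = fs.readFileSync('photo.jpg');      // Buffer of raw JPEG bytes
const asString = original.toString('utf8');         // byte sequences invalid in UTF-8 get replaced
const roundTripped = Buffer.from(asString, 'utf8');

console.log(original.length);     // the real size on disk
console.log(roundTripped.length); // often a few bytes larger, like the sizes reported in the question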
Use the 'mc' npm (npm install mc).
Caveat: I'm the author of the 'mc' npm. I wrote it specifically to handle binary values over memcache's text protocol.