UPDATE AT THE END: SOURCE OF PROBLEM IDENTIFIED
I have a page based on a form that has been stripped of most computed fields from the previous version of the app (so docs can actually pass the computeWithForm function), and I have an issue while saving documents. I have documents that were created in the Notes client (previous app versions) and converted to MIME, and some documents that were created in the new XPages app with CKEditor, saved in a rich text field that is also flagged to store as MIME.
I have a button to "publish" the document. The code works fine for documents created in the new app, but it seems to stop somewhere for documents that were created in the previous app version.
Here's what IE captures as far as HTTP traffic goes:
A case that works:
/mydb.nsf/page.xsp?action=editDocument&documentId=353B2 HTTP POST 302 244 B 218 ms cliquer 0 31 0 187 0 1761
/mydb.nsf/xPublish?OpenAgent&docid=487A3447CFA36BF085257EE400626485 HTTP GET 302 190 B 141 ms cliquer 218 16 125 0 0 1620
http://myserver.com/mydb.nsf/dx/test_13col00 HTTP GET 200 text/html 55.71 Ko 312 ms cliquer 359 0 0 32 0 1308
A case that doesn't work:
http://myserver.com/mydb.nsf/page.xsp?action=editDocument&documentId=353BA HTTP POST 302 244 B 188 ms cliquer 0 32 0 156 0 156
/mydb.nsf/xPublish?OpenAgent&docid=E0E13322928B8F9685257EE400628B0A HTTP (Abandonned) 193 B < 1 ms cliquer 188 0 0 0 0 156
The code in the "Publish" button is:
//set status
if (getComponent("publishLater1").getValue() == "1") {
    pageDocument.replaceItemValue("status", "Scheduled Publication");
} else {
    pageDocument.replaceItemValue("status", "To Be Published");
}
//so we know the document has been saved (for drafts, when cancelled)
pageDocument.replaceItemValue("hasBeenSaved", "1");
//init some fields (res_title, ...)
if (!pageDocument.hasItem("res_title") || pageDocument.getItemValueString("res_title") == "") {
    pageDocument.replaceItemValue("res_title", buildResTitle(pageDocument.getItemValueString("subject")));
}
//set VERKEY and VERNUMBER if not already set
if (pageDocument.getItemValueString("VERKEY") == "") {
    pageDocument.replaceItemValue("VERKEY", @Unique());
}
if (pageDocument.getItemValueString("VERNUMBER") == "") {
    pageDocument.replaceItemValue("VERNUMBER", 1);
}
//save pageDocument
pageDocument.save();
//send to publish agent
//remove the lock doc
//unlockDoc(pageDocument.getDocument().getUniversalID());
//for scheduled publications, a LotusScript agent will do the work
var res = facesContext.getExternalContext().getResponse();
if (getComponent("publishLater1").getValue() == "0") {
    // Now load the publish agent
    var baseUrl = @Left(facesContext.getExternalContext().getRequestContextPath(), ".nsf") + ".nsf/";
    facesContext.getExternalContext().redirect(baseUrl + "xPublish?OpenAgent&docid=" + pageDocument.getDocument().getUniversalID());
    //res.sendRedirect(baseUrl + "xPublish?OpenAgent&docid=" + pageDocument.getDocument().getUniversalID(), false);
} else {
    //send to the drafts view, so it shows the clock icon in the draft view
    context.redirectToPage("adminDrafts.xsp");
}
I'll spare the details of the LotusScript agent that is called (xPublish), but the redirect in that code is done this way:
Print "Location:" + db.Filepath + "/dx/" + res_title
According to IE's HTTP log, something goes wrong while running the code in the button: the HTTP POST is abandoned, therefore the call to the LotusScript agent is also abandoned, and the user is not redirected to the newly published page. Instead, the user is redirected to this URL:
http://myserver.com/mydb.nsf/page.xsp?action=editDocument&documentId=353BA
The big problem here is that this page (the draft version) is deleted by the LotusScript agent, so that URL gives an error page.
If you wonder why the publish code is in LotusScript: we also have a scheduled agent that runs daily and publishes "scheduled publications" set to be published in the future, and this avoids having to maintain both SSJS and LotusScript code.
Any clues to why this happens?
UPDATE
Ok, it seems that the code works OK, but it's the redirection in the LotusScript agent that doesn't do the job. This is what I was using to redirect to the page that was just published:
Print "Location: http://ourserver.com/mydb.nsf/dx/" + res_title
It was working at one point, but now it seems to be causing issues. The funny thing is that the agent works fine for documents we create from scratch, but not for documents that were created in the previous version of the application... Still no clue how to fix that. What is the way to do a redirect from LotusScript for XPages?
OK, I got this all wrong. It is still a bit weird, as it was running fine for new documents but not for the ones created in the previous version of the application, but I was calling the LotusScript agent the wrong way.
By looking at how it was done in the IBM Wiki Template, I noticed that they call a LotusScript agent in a different way, and I tried that. Turns out it works great: the agent is called and the redirection is made without any issue.
Here is the way I now call my agent and do the redirect:
var agent = database.getAgent("xPublish");
var res = facesContext.getExternalContext().getResponse();
agent.runOnServer(pageDocument.getNoteID());
res.sendRedirect(@Left(facesContext.getExternalContext().getRequestContextPath(), ".nsf") + ".nsf/dx/" + pageDocument.getItemValueString("res_title"));
As I said, I'm not sure why my original code stopped working and only had problems with docs created in the previous version of the app, but the new code works on all docs, all the time. If IBM does it this way, I guess it might be the right way.
That wiki app has a lot of code in it! Have a look at it to get some valuable pieces of code or to get inspiration!
Related
I'm trying to add some images with Ajax via DirectUpload / ActiveStorage / Rails 6.
I followed the prerequisites in the ActiveStorage guide for using DirectUpload with jQuery:
https://edgeguides.rubyonrails.org/active_storage_overview.html#integrating-with-libraries-or-frameworks
import { DirectUpload } from "@rails/activestorage"

const upload = new DirectUpload(file, url)
upload.create((error, blob) => {
  if (error) {
    // Handle the error
  } else {
    // Add an appropriately-named hidden input to the form with a
    [..]
    console.log(blob.key);
  }
})
Locally, it works for all files. But when I deploy my app to my hosting provider, I get an error for some files, always the same ones, after the DirectUpload request:
Completed 422 Unprocessable Entity in 2ms (ActiveRecord: 0.0ms | Allocations: 689)
I looked at the XHR requests in my browser's dev tools, but the payload seems the same for a file which works and another which fails:
{id: 219, key: "v2v1aqlk8gyygcc4smjeh0bbuc59", filename: "groupama logo.jpeg",…}
id: 219
key: "v2v1aqlk8gyygcc4smjeh0bbuc59"
filename: "logo.jpeg"
content_type: "image/jpeg"
metadata: {}
byte_size: 17805
checksum: "3GIVi2kNKClfH+d9HGYOfkA=="
created_at: "2020-04-09T08:25:40.000+02:00"
signed_id: "eyJfcmFpbHMiOnsibWVzc2zaFnZSI6IkJBaHBBZHM9IiwiZXhwIjpudWxsLCJwdXIiOiJibG9iX2lkIn19--7c0750cb8c86a955a04fa9a11dc5389cdeb5e7b0"
attachable_sgid: "BAh7CEkiCGdpZAY6BkVUSSIxsaZ2lkOi8vYXBwL0FjdGl2ZVN0b3JhZ2U6OkJsb2IvMjE5P2V4cGlyZXNfaW4GOwBUSSIMcHVycG9zZQY7AFRJIg9hdHRhY2hhYmxlBjsAVEkiD2V4cGlyZXNfYXQGOwBUMA==--64a945c38dc5d85c05156da50b9c38819b106e10"
direct_upload: {,…}
url: "http://localhost:8491/rails/active_storage/disk/eyJfcmFpbHMiOnsibWVzc2FnZSI6IkJBaDdDVG9JYTJWNVNTSWhkakoyTVdGeGJHczRaM2w1WjJOak5ITnRhbVZvTUdKaWRXTTFPUVk2QmtWVU9oRmpiMjaUwWlc1MFgzUjVjR1ZKSWc5cGJXRm5aUzlxY0dWbkJqc0dWRG9UWTI5dWRHVnVkRjlzWlc1bmRHaHBBbzFGT2cxamFHVmphM04xYlVraUhUTkhTVlpwTW10T1MwTnNaa2dyT1VoSFdVOW1hMEU5UFFZN0JsUT0iLCJleHAiOiIyMDIwLTA0LTA5VDA2OjMwOjQwLjg5NFoiLCJwdXIiOiJibG9iX3Rva2VuIn19--a2acedc0924f735c5cc08db8c4b76f76accc3c8d"
headers: {Content-Type: "image/jpeg"}
I tried this solution, but the monkey patch doesn't work for me, and another solution doesn't seem to work either:
Rails API ActiveStorage DirectUpload produce 422 Error InvalidAuthenticityToken
I noticed that when I upload the logo image file without using DirectUpload on the file input, the file is sent to my server correctly.
= f.file_field :logos, direct_upload: true
Do you have any ideas for what to test?
My issue came from the IO that is used to copy the file. In ActiveStorage::DiskController#update, Rails uses request.body and IO.copy_stream to create the file, and a checksum is computed to verify the created file.
That check fails and throws the 422 HTTP error.
I noticed that in dev mode the IO stream was a StringIO, whereas on my host it was a uwsgi IO object, because my host serves the Ruby on Rails application with uwsgi.
The uwsgi IO does not have length or size methods, and when ActiveStorage creates the file from this IO the file size is wrong: weirdly too large.
I noticed that if RAW_POST_DATA is assigned, then request.body returns a StringIO. And in the request.raw_post method, the body is read directly with request.content_length:
raw_post_body.read(content_length)
https://api.rubyonrails.org/classes/ActionDispatch/Request.html#method-i-body
I created a new controller which inherits from ActiveStorage::DiskController, in order to assign RAW_POST_DATA before the DiskController#update action:
class UploadController < ActiveStorage::DiskController
  def update
    request.env['RAW_POST_DATA'] = request.body.read(request.content_length)
    super
  end
end
And then I override the ActiveStorage disk#update route with mine:
put '/rails/active_storage/disk/:encoded_token', to: 'upload#update'
And it works!
ActionText uses ActiveStorage to store images, and I had the same issue with images that were too large.
My patch makes ActionText work on my host too.
Kudos for finding the culprit here. I was having the exact same issue on a uwsgi setup and was trying to figure out why the content length was different when I found your post. Thanks for sharing!
I just took a slightly different approach with the fix, as I didn't want to hack the ActiveStorage routes, so I created a config/initializers file with the following code:
Rails.configuration.to_prepare do
  ActiveStorage::DiskController.class_eval do
    before_action :set_raw_post_data, only: :update

    def set_raw_post_data
      request.env['RAW_POST_DATA'] = request.body.read(request.content_length)
    end
  end
end
Maybe it would be worth creating an issue in https://github.com/unbit/uwsgi to let them know about this?
I want to clear the pending_update_count in my bot!
The output of the command below:
https://api.telegram.org/botxxxxxxxxxxxxxxxx/getWebhookInfo
(Obviously I replaced the real API token with xxx.)
is this:
{
  "ok": true,
  "result": {
    "url": "",
    "has_custom_certificate": false,
    "pending_update_count": 5154
  }
}
As you can see, I have 5154 unread updates so far! (I'm pretty sure these pending updates are errors, because no one uses this bot. It's just a test bot.)
By the way, this pending_update_count number is increasing fast!
While I was writing this post the number increased by 51 and reached 5205!
I just want to clear these pending updates.
I'm pretty sure this bot is stuck in an infinite loop!
Is there any way to get rid of it?
P.S.: I also cleared the webhook URL, but nothing changed!
UPDATE:
The output of getWebhookInfo is this:
{
"ok":true,
"result":{
"url":"https://somewhere.com/telegram/webhook",
"has_custom_certificate":false,
"pending_update_count":23,
"last_error_date":1482910173,
"last_error_message":"Wrong response from the webhook: 500 Internal Server Error",
"max_connections":40
}
}
Why do I get Wrong response from the webhook: 500 Internal Server Error?
I think you have two options:
Set a webhook that does nothing and just answers 200 OK to Telegram's servers. Telegram will send all pending updates to this URL and the queue will be cleared.
Disable the webhook, fetch the updates with the getUpdates method, then turn the webhook back on (see the sketch below).
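A rough sketch of the second option, using Python and the requests library (the token and webhook URL below are placeholders, not real values):
import requests

API = "https://api.telegram.org/bot" + "123456:ABC-your-token"  # placeholder token

# 1. Remove the webhook so getUpdates is allowed again.
requests.get(API + "/deleteWebhook")

# 2. Drain the queue: asking for an offset one past the last update_id we saw
#    tells Telegram to discard everything up to and including that update.
offset = None
while True:
    params = {"offset": offset} if offset is not None else {}
    batch = requests.get(API + "/getUpdates", params=params).json()["result"]
    if not batch:
        break
    offset = batch[-1]["update_id"] + 1

# 3. Put the webhook back.
requests.get(API + "/setWebhook", params={"url": "https://somewhere.com/telegram/webhook"})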
Update:
The problem is with the webhook on your side. You can try to emulate Telegram's POST query against your URL.
It can be something like this:
{"message_id":1,"from":{"id":1,"first_name":"FirstName","last_name":"LastName","username":"username"},"chat":{"id":1,"first_name":"FirstName","last_name":"LastName","username":"username","type":"private"},"date":1460957457,"text":"test message"}
You can send this text as a POST request body with Postman, for example, and then debug your backend.
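A quick sketch of the same idea in Python with the requests library (the webhook URL is a placeholder; note that a real Telegram update also wraps the message in an update_id envelope):
# Replay a Telegram-style update against your own webhook to debug the 500 error.
import requests

sample_update = {
    "update_id": 1,
    "message": {
        "message_id": 1,
        "from": {"id": 1, "first_name": "FirstName", "last_name": "LastName", "username": "username"},
        "chat": {"id": 1, "first_name": "FirstName", "last_name": "LastName",
                 "username": "username", "type": "private"},
        "date": 1460957457,
        "text": "test message",
    },
}

resp = requests.post("https://somewhere.com/telegram/webhook", json=sample_update)
print(resp.status_code, resp.text)  # anything other than 200 is what Telegram keeps retrying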
For anyone looking at this in 2020 and beyond, the Telegram API now supports clearing the pending messages via a drop_pending_updates parameter in both setWebhook and deleteWebhook, as per the API documentation.
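For example, with Python and requests (placeholder token; the same parameter also works on setWebhook):
# Drop everything that is still queued while removing the webhook.
import requests

requests.get("https://api.telegram.org/bot123456:ABC-your-token/deleteWebhook",
             params={"drop_pending_updates": True})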
Just add return 1; at the end of your hook method.
Update:
Commonly this happens because of query delays with the database.
I solved it like this:
POST tg.api/bottoken/setWebhook with an empty "url"
POST tg.api/bottoken/getUpdates
POST tg.api/bottoken/getUpdates with "offset" set to the last update_id that appeared before
(doing this several times)
POST tg.api/bottoken/getWebhookInfo
to check that everything is gone.
POST tg.api/bottoken/setWebhook with the "url" filled in again
If you are using a webhook, you can follow these steps:
In your web browser, enter the following URL with the right value of your bot token:
https://api.telegram.org/bot<token>/getWebhookInfo
You will get a result like this on your screen
{"ok":true,"result":{"url":"url_value",...}}
From the displayed result, copy the entire url_value without quotes and substitute it into this second URL:
https://api.telegram.org/bot<token>/setWebhook?url=url_value&drop_pending_updates=True
Enter the second URL, with the right bot token and url_value, in your web browser, then press ENTER.
Done!
I solved it by changing the file access permissions (set the file permissions to 755)
and, second, by increasing the memory limit in the php.ini file.
A quick & dirty way is to get a temporary webhook here: https://webhook.site/ and
set your webhook to that (it will answer with an HTTP 200 code every time, resetting your pending messages to zero).
I faced the same issue with my Telegram bot after a user edited an existing message. My bot kept receiving updates with editedMessage, but update.hasMessage() was false. As a result the number of updates increased rapidly and my bot got stuck.
I solved this issue by adding handling for the case when the message is missing: send a 200 code.
public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) {
    update = MAPPER.readValue(event.getBody(), Update.class);
    if (!update.hasMessage()) {
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200) // -> !!!!!! return code 200
                .withBody("message is missing")
                .withIsBase64Encoded(false);
    }
... ... ...
Ok, so I am using Node.js and Azure Blob Storage to handle some file uploads.
When a person uploads an image I would like to show them a thumbnail of the image. The upload works great and I have it stored in my blob.
I used this fine link (Generating Azure Shared Access Signatures with BlobService.getBlobURL() in Azure SDK for Node.js) to help me create this code to create a shared access temporary URL.
process.env['AZURE_STORAGE_ACCOUNT'] = "[MY_ACCOUNT_NAME]";
process.env['AZURE_STORAGE_ACCESS_KEY'] = "[MY_ACCESS_KEY]";
var azure = require('azure');
var blobs = azure.createBlobService();
var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: Date.now(),
    Expiry: azure.date.minutesFromNow(60),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
This creates a url just fine.
Something like this : https://[ACCOUNT_NAME].blob.core.windows.net:443/[CONTAINER_NAME]/[BLOB_NAME]?st=2013-12-13T17%3A33%3A40Z&se=2013-12-13T18%3A33%3A40Z&sr=b&sp=r&sig=Tplf5%2Bg%2FsDQpRafrtVZ7j0X31wPgZShlwjq2TX22mYM%3D
The problem is that when I take the temp URL and plug it into my browser, it will only download the image rather than display it (in this case it is a simple jpg file).
This carries over to my code: I can't seem to view it in an <img> tag...
The link is right and downloads the right file...
Is there something I need to do to view the image rather than download it?
Thanks,
David
UPDATE
Ok, so I found this article:
http://social.msdn.microsoft.com/Forums/windowsapps/en-US/b8759195-f490-420b-a587-2bb614366ad2/embedding-images-from-blob-storage-in-ssrs-report-does-not-work
Basically it told me that I wasn't setting the content type when uploading, so the browser didn't know what to do with the file.
I used code from here: http://www.snip2code.com/Snippet/8974/NodeJS-Photo-Upload-with-Azure-Storage/
This allowed me to upload it correctly and it now views properly in the browser.
The issue I am having now is that when I put the tempUrl into an img tag I get this error:
Failed to load resource: the server responded with a status of 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)
This is the exact same link that works just fine if I paste it into my browser... why can't I show it in an image tag?
UPDATE 2
Ok, so as a stupid test I put in a 7 second delay between when my page loads and when the img tag gets its source from the temp URL. This seems to fix the problem (most of the time), but it is, obviously, a crappy solution even when it works...
At least this verifies that, because it works sometimes, my markup is at least correct.
I can't, for the life of me, figure out why a delay would make a bit of difference...
Thoughts?
UPDATE 3
Ok, based on a comment below, I have tried to set my start time to about 20 minutes in the past.
var start = moment().add(-20, 'm').format('ddd MMM DD YYYY HH:mm:ss');
var tempUrl = blobs.getBlobUrl(Container, Filename, { AccessPolicy: {
Start: start,
Expiry: azure.date.minutesFromNow(60),
Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
I made my start variable the same format as the azure.date.minutesFromNow. It looks like this: Fri Dec 13 2013 14:53:58
When I do this I am not able to see the image even from the browser, much less the img tag. Not sure if I am doing something wrong there...
UPDATE 4 - ANSWERED
Thanks to the glorious @MikeWo, I have the solution. The correct code is below:
var tempUrl = blobs.getBlobUrl('[CONTAINER_NAME]', "[BLOB_NAME]", { AccessPolicy: {
    Start: azure.date.minutesFromNow(-5),
    Expiry: azure.date.minutesFromNow(45),
    Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.READ
}});
Mike was correct in that there seemed to be some sort of clock difference between the server and my localhost, so I needed to set the start time in the past. In update 3 I was doing that, but Mike noticed that Azure does not allow the span between the start and end time to be more than 60 minutes... so in update 3 I had a -20 minute start and a +60 minute end, which is 80 minutes.
The new, successful way I have above makes the total difference 50 minutes and it works without any delay at all.
Thanks for taking the time Mike!
Short version: There is a bit of time drift that occurs in distributed systems, including in Azure. In the code that creates the SAS, instead of using a start time of Date.now(), set the start time to a minute or two in the past. Then you should be able to remove the delay.
Long version: The clock on the machine creating the signature and using Date.now() might be a few seconds faster than the machines in BLOB storage. When the request to the URL is made immediately, the BLOB service hasn't reached the "start time" of the SAS yet and thus throws the 403. So, by setting the start time a few seconds in the past, or even the start of the current day if you want to cover a massive clock drift, you build in handling of the clock drift.
UPDATE: After some trial and error: make sure that an ad hoc SAS isn't valid for longer than an hour. Setting the start time a few minutes in the past and the expiration 60 minutes in the future was too long. Set the start a little in the past and the expiration not quite an hour after that.
I query the view like this:
/db/_design/myviewname/_view/foo?key=%22ABC123%22
The result is the following:
{
total_rows: 3,
offset: 3,
rows: [ ]
}
All good.
Since no doc was found I'd like to throw a 404 from a show or list.
Is that possible?
According to the wiki, you can issue redirect responses via show/list functions. As such, it is also possible to send out arbitrary HTTP status codes (like 404).
function (head, req) {
    start({ code: 404 });
}
I'm not sure if 404 would be the right choice here. It really means not found.
From the HTTP/1.1 spec (RFC 2616):
10.4.5 404 Not Found
The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.
There is another more appropriate response status code I think. 204 No Content which sounds more like what you really want to tell the client.
10.2.5 204 No Content
The server has fulfilled the request but does not need to return an entity-body, and might want to return updated metainformation. The response MAY include new or updated metainformation in the form of entity-headers, which if present SHOULD be associated with the requested variant.
If the client is a user agent, it SHOULD NOT change its document view from that which caused the request to be sent. This response is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view.
The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.
Now, to set a custom response code you simply specify it in the object you pass to start (in a list function) or return (in a show function), like this:
function(head, req) {
    return { "code": 204 };
}
I have a similar situation to this question.
I have a custom sequential SharePoint workflow, developed in Visual Studio 2008. It is associated with an InfoPath form submitted to a form library. It is configured to start automatically when an item is created.
It works sometimes. Sometimes it just fails to start.
Just like the question linked above, I checked in the debugger, and the issue is that the InfoPath fields published as columns in the library are empty when the workflow fires. (I access the fields with workflowProperties.Item["fieldName"].) But there appears to be a race condition, as those fields actually show up in the library view, and if I terminate the failed workflow and restart it manually, it works fine!
After a lot of head-scratching and testing, I've determined that the workflow will start successfully if the user is running any version of IE on Windows XP, but it fails if the same user submits the same form data from a Vista or Windows 7 client machine.
Does anyone have any idea why this is happening?
I have used another solution which simply waits until the InfoPath property is available (or 60 seconds at most):
public SPWorkflowActivationProperties workflowProperties =
    new SPWorkflowActivationProperties();

private void onOrderFormWorkflowActivated_Invoked(object sender, ExternalDataEventArgs e)
{
    SPListItem workflowItem;
    workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);
    int waited = 0;
    int maxWait = 60000; // Max wait time in ms
    while (workflowItem["fieldName"] == null && (waited < maxWait))
    {
        System.Threading.Thread.Sleep(1);
        waited++;
        workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);
    }
    // For testing: write the delay time to a Workflow History event
    SPWorkflow.CreateHistoryEvent(
        workflowProperties.Web,
        workflowProperties.WorkflowId,
        (int)SPWorkflowHistoryEventType.WorkflowComment,
        workflowProperties.OriginatorUser, TimeSpan.Zero,
        waited.ToString() + " ms", "Waiting time", "");
}
workflowProperties.Item will never pick up the InfoPath property in the code above;
workflowProperties.List.GetItemById(workflowProperties.ItemId) will, after some delay.
This occurs because Vista/7 saves InfoPath forms through WebDAV, whereas XP uses another protocol (sorry, I can't remember which at the moment). SharePoint catches the "ItemAdded" event before the file is actually uploaded (that is, the item is already created, but the file upload is still in progress).
As a workaround, you can add a delay activity that waits 10 seconds as the first thing in your workflow (it will actually take longer than ten seconds due to the way workflows are scheduled in SharePoint). This way the upload will already have finished when you read the item. To inform users about what's happening, you can add a "logToHistoryList" activity before the delay, roughly as in the sketch below.
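A rough sketch of that wiring in the workflow code-behind (activity and class names here are hypothetical; the DelayActivity and LogToHistoryListActivity themselves are dropped onto the designer surface right after onWorkflowActivated1, so their field declarations live in the generated designer file):
using System;
using System.Workflow.Activities;

public sealed partial class PublishFormWorkflow : SequentialWorkflowActivity
{
    private void onWorkflowActivated1_Invoked(object sender, ExternalDataEventArgs e)
    {
        // Tell the user why nothing is happening yet...
        logToHistoryListActivity1.HistoryDescription = "Waiting for the InfoPath upload to finish";

        // ...then pause so the WebDAV upload from Vista/7 clients can complete
        // before anything reads workflowProperties.Item["fieldName"].
        delayActivity1.TimeoutDuration = TimeSpan.FromSeconds(10);
    }
}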