I have a small form interface where users can enter the amount of money they want to load into their account. Users have to load a minimum amount of money, so I need to reject the request if a user enters an amount less than a certain threshold (e.g. 50 USD).
I have JS validation in place on the form for the minimum load. The user enters the desired amount and clicks checkout. After the checkout process, the Stripe JS API returns a token. The token, along with other data (like amount and currency), is sent to the server.
Now I need to validate the amount on the server. I can check the amount and print an error message if it fails validation. The token that the Stripe JS API created will remain unused if the validation is rejected.
My question is: what problems can arise if I keep the token unused?
Actually, you don't need to validate this on your server. If you call the Create a Charge API endpoint with a value less than the minimum (in any currency), Stripe will simply raise an error (400 Bad Request).
Example in Python:
>>> stripe.Charge.create(amount=49, currency="usd", source="tok_visa")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/lib/python2.7/site-packages/stripe/api_resources/abstract/createable_api_resource.py", line 17, in create
response, api_key = requestor.request('post', url, params, headers)
File "~/lib/python2.7/site-packages/stripe/api_requestor.py", line 153, in request
resp = self.interpret_response(rbody, rcode, rheaders)
File "~/lib/python2.7/site-packages/stripe/api_requestor.py", line 365, in interpret_response
self.handle_error_response(rbody, rcode, resp.data, rheaders)
File "~/lib/python2.7/site-packages/stripe/api_requestor.py", line 178, in handle_error_response
raise err
stripe.error.InvalidRequestError: Request req_xxx: Amount must be at least 50 cents
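Note, though, that Stripe's floor in the traceback is 50 cents, which is far below the 50 USD minimum the question asks about, so a server-side pre-check may still be worth having. A minimal sketch, assuming amounts arrive as integers in the smallest currency unit (cents); validate_amount and the constant are hypothetical names, not Stripe API:

```python
# Business minimum from the question: 50 USD, expressed in cents
# (Stripe amounts are integers in the smallest currency unit).
MINIMUM_AMOUNT_CENTS = 5000

def validate_amount(amount):
    """Return True only for an integer amount meeting the minimum load."""
    if not isinstance(amount, int):
        return False
    return amount >= MINIMUM_AMOUNT_CENTS

print(validate_amount(4900))  # False: below the 50 USD minimum
print(validate_amount(5000))  # True
```

Rejecting early this way also means no token is consumed and no network round trip is made for obviously invalid amounts.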
I'm getting this error in my Node service, and after a long debugging process I know why it's occurring: some clients are invoking my API with a Content-Length header that doesn't match the length of the actual body (either the Content-Length header is larger than it should be, or the body is being truncated due to poor network conditions). There's not much I can do on the server side to fix this, so I want to suppress these errors: they are setting off a lot of monitors, creating too much noise, and badly skewing my latency statistics (these failed requests take 5 minutes each for some reason).
BadRequestError: request aborted
File "/var/app/node_modules/raw-body/index.js", line 231, in IncomingMessage.onAborted
done(createError(400, 'request aborted', {
File "node:events", line 390, in IncomingMessage.emit
File "node:domain", line 537, in IncomingMessage.emit
File "node:_http_incoming", line 179, in IncomingMessage._destroy
File "node:internal/streams/destroy", line 102, in _destroy
I've done some digging, and while other people online correctly identify the error and its source, nothing gives an example of code that successfully suppresses it and stops the error from being thrown.
Additionally, I would like to find a way to shorten the time Express waits before throwing this error and giving up on the request. I set Envoy to a 60-second timeout for this service, and it should be killing off the request after 60 seconds, but Express hangs onto it for 5 minutes and I cannot figure out why. The 0.3% of requests that throw this error currently account for about 50% of the compute time of my service.
I've included snippets of what I believe is the relevant middleware below (leaving out things like the Sentry and Datadog integrations); if anything else would be helpful, let me know and I will add it.
const app = express()
...
app.use(express.json({ limit: '1mb' }));
app.use(helmet());
...
app.listen(process.env.APP_PORT ?? 4444)
TL;DR: how do I suppress the error above so it fails silently, and how do I shorten the timeout so it fails sooner?
I have created a module in Odoo 12 which allows portal users to manage their timesheets.
For all the available features of the module, I used sudo() in the controller so that the portal user does not run into access-rights issues.
When creating a new timesheet, the controller directly calls the create() function, and when deleting it calls unlink(). But when the user wants to edit a timesheet, I redirect them to another page with an edit form, and that page shows a 403 Forbidden error when the portal user navigates to it.
The issue occurs only with a newly created portal user; it works for Joel Willis, who already exists in Odoo.
I have added sudo() in that edit timesheet controller too, but it did not work.
Like this:
class EditTimesheet(http.Controller):
    @http.route(['/edit_timesheet/<model("account.analytic.line"):timesheet>'], type='http', auth="public", website=True)
    def _edit_timesheet(self, timesheet, category='', search='', **kwargs):
        self.sudo().edit_timesheet(timesheet, category='', search='', **kwargs)

    def edit_timesheet(self, timesheet, category='', search='', **kwargs):
        return request.render("timesheet_module.edit_timesheet", {'timesheet': timesheet.sudo()})
Error in the logger:
Traceback (most recent call last):
File "/home/milan/workspace/odoo/odoo12/odoo/api.py", line 1049, in get
value = self._data[key][field][record._ids[0]]
KeyError: 6
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/milan/workspace/odoo/odoo12/odoo/fields.py", line 1012, in __get__
value = record.env.cache.get(record, self)
File "/home/milan/workspace/odoo/odoo12/odoo/api.py", line 1051, in get
raise CacheMiss(record, field)
odoo.exceptions.CacheMiss: ('account.analytic.line(6,).display_name', None)
odoo.exceptions.AccessError: ('The requested operation cannot be completed due to security restrictions. Please contact your system administrator.\n\n(Document type: Analytic Line, Operation: read) - (Records: [6], User: 8)', None)
When you use <model("account.analytic.line"):timesheet> in the route, I believe Odoo checks the logged-in user's permissions for the model as soon as the route is hit, so the error is thrown before you even reach the sudo() call. I would recommend taking an account.analytic.line id instead (make sure you pass in just the id then) and combining your two routes into one, like so:
@http.route(['/edit_timesheet/<int:timesheet_id>'], type='http', auth="public", website=True)
def edit_timesheet(self, timesheet_id, category='', search='', **kwargs):
    timesheet = request.env['account.analytic.line'].sudo().browse(timesheet_id)
    return request.render("timesheet_module.edit_timesheet", {'timesheet': timesheet})
I'm using Python 3.6 and PRAW 6, trying to write a simple bot with subreddit filters that cross-posts hot submissions into another subreddit. However, I can't seem to set up my subreddit filter properly when initiating the script.
This is pretty annoying because it has worked before. I read that a 403 HTTP response indicates authentication issues, but that doesn't make sense here: I can add subreddits to the filter individually, and I even managed to iteratively remove subreddits from a saved subreddit filter list which I had set up beforehand.
I have a sub_filter.txt file with the list of subreddits I would like to filter out, containing strings like so:
tifu
jokes
worldnews
Then,
with open("sub_filter.txt") as q:
    subreddit_filter = q.read().split("\n")

subreddit_filter = list(filter(None, subreddit_filter))

for i in subreddit_filter:
    reddit.subreddit('all').filters.add(i)

for subreddit in reddit.subreddit('all').filters:
    print(subreddit)
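As an aside, the file-reading step above can be sketched and tested in isolation; here io.StringIO stands in for the real sub_filter.txt, and load_filter_list is just an illustrative name:

```python
import io

def load_filter_list(f):
    """Split the file contents on newlines and drop empty entries."""
    return [name for name in f.read().split("\n") if name]

# Stand-in for open("sub_filter.txt"):
fake_file = io.StringIO("tifu\njokes\nworldnews\n")
print(load_filter_list(fake_file))  # ['tifu', 'jokes', 'worldnews']
```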
This is the error message I get when it reaches the code that iteratively adds subreddits to the subreddit filter:
for i in subreddit_filter:
filter_list = reddit.subreddit('all').filters.add(i)
Traceback (most recent call last):
File "C:\Users\Qixuan\Desktop\Programming programmes\Reddit\weweet-bot\weweet-code.py", line 23, in <module>
reddit.subreddit('all').filters.add(i)
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\praw\models\reddit\subreddit.py", line 974, in add
"PUT", url, data={"model": dumps({"name": str(subreddit)})}
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\praw\reddit.py", line 577, in request
method, path, data=data, files=files, params=params
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\prawcore\sessions.py", line 185, in request
params=params, url=url)
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\prawcore\sessions.py", line 130, in _request_with_retries
raise self.STATUS_EXCEPTIONS[response.status_code](response)
prawcore.exceptions.Forbidden: received 403 HTTP response
Any help is greatly appreciated! I'm also not very proficient at coding, so please be forgiving!
Welcome to StackOverflow!
I read that 403 HTTP response was authentication issues
403 does indeed indicate an authentication issue. Try adding the following line immediately after reddit is defined:
print(reddit.user.me())
If you are properly authenticated, this will print your username. Otherwise, you need to fix your credentials (username, password, client ID, client secret, user agent).
I have a subscription to a topic using filters in Azure Service Bus, developed with Python 3.x, and when I wait for the information sent to that topic (information that passes the filter) I cannot receive it.
I need to create a daemon that is always listening and that, when the information arrives, sends it to an internal service of my application, so the receiver runs in a thread inside a while True loop.
The code I use to receive the messages is as follows:
while True:
    msg = bus_service.receive_subscription_message(topic_name, subscription_name, peek_lock=True)
    print('Message received -->', msg.body)
    data = msg.body
    send_MyApp(data.decode("utf-8"))
    msg.delete()
This is what I get when I run it:
Message --> None
Exception in thread Thread-1:
Traceback (most recent call last):
File "..\AppData\Local\Programs\Python\Python36-32\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "..\AppData\Local\Programs\Python\Python36-32\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "../Python/ServiceBusSuscription/receive_subscription.py", line 19, in receive_subscription
send_MyApp(data.decode("utf-8"))
AttributeError: 'NoneType' object has no attribute 'decode'
If I run the receiver outside the thread, this is the error message it shows (again when the timeout expires; a timeout I would like to remove, because a daemon that is waiting should never give up). Basically, it is the same error:
Traceback (most recent call last):
File "../Python/ServiceBusSuscription/receive_subscription.py", line 76, in <module>
main()
File "../Python/ServiceBusSuscription/receive_subscription.py", line 72, in main
demo(bus_service)
File "../Python/ServiceBusSuscription//receive_subscription.py", line 25, in demo
print(msg.body.decode("utf-8"))
AttributeError: 'NoneType' object has no attribute 'decode'
I do not receive the information I'm waiting for, and a Service Bus timeout fires as well (which I have not configured).
Can anybody help me? Microsoft's documentation does not help much, really.
Thanks in advance
UPDATE
I think the problem lies with Azure Service Bus subscriptions and filters. Actually, I have 23 filters, and I think Azure Service Bus only works with one subscription :( But I'm not sure about this point.
I managed to reproduce your issue, and discovered that it happens when there is no message in your topic.
So you need to check whether msg.body is None (or confirm its type) before decoding the bytes of msg.body, as below:
data = msg.body
if data is not None:
    # Or: if isinstance(data, bytes):
    send_MyApp(data.decode("utf-8"))
    msg.delete()
else:
    ...
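The None check can also be pulled out into a small helper that is easy to test on its own; safe_decode below is a hypothetical name, not part of the Azure SDK:

```python
def safe_decode(body, encoding="utf-8"):
    """Decode a message body if one was received; pass None through."""
    if body is None:
        return None
    return body.decode(encoding)

print(safe_decode(None))      # None: the poll returned no message
print(safe_decode(b"hello"))  # prints: hello
```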
Hope it helps.
I'm trying to get only my own media.
/users/self/feed --> this return user follow feeds
/users/self/media/recent --> return only recent post
I want to get all my own media with these parameters: count=10&max_id=?
Use this API with your {user-id}:
https://api.instagram.com/v1/users/{user-id}/media/recent/?client_id=YOUR-CLIENT_ID
This call will return a maximum of 20 photos; then use the pagination.next_url in the API response to make the next API call and get the next 20 photos, and so on.
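The pagination loop itself is generic and can be sketched like this, with fetch_page standing in for the real HTTP GET against the Instagram API (fetch_all_media and the stubbed pages dict are purely illustrative):

```python
def fetch_all_media(fetch_page, first_url):
    """Follow pagination.next_url links until the feed is exhausted."""
    media = []
    url = first_url
    while url:
        page = fetch_page(url)  # in real code: an HTTP GET returning parsed JSON
        media.extend(page.get("data", []))
        url = page.get("pagination", {}).get("next_url")
    return media

# Stubbed two-page response instead of real API calls:
pages = {
    "page1": {"data": ["pic1", "pic2"], "pagination": {"next_url": "page2"}},
    "page2": {"data": ["pic3"], "pagination": {}},
}
print(fetch_all_media(pages.get, "page1"))  # ['pic1', 'pic2', 'pic3']
```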