This is not a repeat question. I am designing a routing algorithm that works in real time. I know all about the GTFS static feed, but I can't figure out my city's realtime feed. I want to understand how to parse a GTFS realtime feed using Python.
The link to my city's realtime dataset is https://opendata.iiitd.edu.in/data/realtime/
I know a little about Protocol Buffers and the requests library. The documentation does not give the URL at which the API request should be made.
What is the URL for this dataset that goes into requests.get?
There is a Python module called gtfs_realtime_pb2 (distributed in the gtfs-realtime-bindings package), which can be used to parse the response of a request to the realtime feed and produce useful output.
The main page for this library is here. There are also bindings for other languages.
The main workflow is to initialize the feed object:

feed = gtfs_realtime_pb2.FeedMessage()

get the response from the feed with the requests package:

response = requests.get(<url>, allow_redirects=True)

and parse it, for example:

feed.ParseFromString(response.content)
feed.entity[i]  # access the i-th entity in the parsed feed
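Putting the pieces together, here is a minimal sketch, assuming the gtfs-realtime-bindings package is installed (pip install gtfs-realtime-bindings). The feed URL is a placeholder, since the exact endpoint for the portal above is what the question is asking for:

import requests
from google.transit import gtfs_realtime_pb2

FEED_URL = 'https://example.com/gtfs-realtime/VehiclePositions.pb'  # placeholder URL

feed = gtfs_realtime_pb2.FeedMessage()
response = requests.get(FEED_URL, allow_redirects=True)
response.raise_for_status()
feed.ParseFromString(response.content)

# each FeedEntity carries one of: trip_update, vehicle, or alert
for entity in feed.entity:
    if entity.HasField('vehicle'):
        pos = entity.vehicle.position
        print(entity.id, pos.latitude, pos.longitude)
    elif entity.HasField('trip_update'):
        print(entity.id, entity.trip_update.trip.trip_id)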
I would like to send a POST request using the requests library in Python 3. The only difficulty I am having is how to embed the urlencoded form part. The raw form looks like:
URLEncoded form:
data[type]: portion_send
data[attributes][style]: first_style
data[nodes][front][data][id]:1111
data[nodes][back][data][id]:1115
The best I have been able to do is to have:
data = {"data":{"attributes":{"style":"first_style"},"nodes":{"front":{"data":{"id":"1111"}},"back":{"data":{"id":"1115"}}},"type":"portion_send"}}
and to embed this inside a requests.post function with params = data, params = json.dumps(data), and data = json.dumps(data), each to no avail.
Does anyone have any ideas on how I can put this into a POST request? The strange part is how deep the nesting goes, and it is throwing me off. Thanks!
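One hedged sketch (the endpoint URL below is a placeholder, not from the question): because the raw form is urlencoded, the bracketed names are literal field keys rather than real nesting, so you can flatten them into the key strings and pass a plain dict as data=, which requests will urlencode for you:

import requests

# the bracketed names are literal form-field keys, so flatten the
# nesting into the key strings and let requests do the urlencoding
payload = {
    'data[type]': 'portion_send',
    'data[attributes][style]': 'first_style',
    'data[nodes][front][data][id]': '1111',
    'data[nodes][back][data][id]': '1115',
}
response = requests.post('https://example.com/endpoint', data=payload)  # placeholder URL
print(response.status_code)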
Python Requests Module
I would need to know what the error message was in order to help you with the Python requests module.
Using Linux or macOS curl
If the above fails, you could try shelling out to curl instead. Note that os.execv replaces the current process and needs an executable path plus an argument list, so subprocess is the better fit here:

import subprocess
subprocess.run(['curl', '-d', 'data=value', 'http://YourWebsite.com'])

This runs curl as a separate process on your PC. If you run Windows, you could install curl from curl.haxx.se.
I have a simple app that saves and displays records from a database,
#user_endpoints.get("/user/<id>")
def get_user(request, id):
dct = User.by_id(id)
if not dct:
return response.json({"Error": "Not Found"}, status=404)
return response.json(dct.to_dict(), status=200)
When it comes to displaying a user list, something like the code below was sufficient:
#user_endpoints.get("/users")
def list_users(request):
dct = User.all()
template = template_env.get_template('user_list.html')
content = template.render(title='User List', users=dct)
return response.html(content)
The above uses Jinja2, but that is not important to me (this is not a real app). I am not clear on how to display a form for creating a new user and saving the submitted data; can someone provide a simple example of that?
There is no "one way" with Sanic to do that. Sanic does not know about forms. It completely depends on what you do in the frontend, how you encode the data. Are you sending JSON or is it "form-data" encoded? Maybe something completely different?
You would certainly use "POST" instead of "GET" as you did above. You can inspect request, find the data that was sent from the frontend, and then act on it. (Although nowadays you would start by designing and implementing a proper (REST) API, usually based on JSON, and then use that. The word "form" does not appear there; it is just data.)
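As a rough sketch of that idea, assuming the same User model and blueprint as above, a frontend that sends either JSON or a form-encoded body, and hypothetical field names ("name", "email") plus an assumed User.create helper:

@user_endpoints.post("/users")
async def create_user(request):
    # accept either a JSON body or form-encoded data; the field names
    # "name" and "email" are hypothetical, not from the original app
    if request.content_type and 'application/json' in request.content_type:
        data = request.json or {}
    else:
        data = request.form
    name = data.get('name')
    email = data.get('email')
    if not name:
        return response.json({"Error": "name is required"}, status=400)
    user = User.create(name=name, email=email)  # assumed model helper
    return response.json(user.to_dict(), status=201)

A classic HTML form would then simply POST to /users with those field names.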
You could use Sanic-WTF
There is an example with Sanic-WTF + Jinja2 - https://bitbucket.org/voron-raven/synergy/src/1b8172a0bc61c5239c5ad2a2b9f064ed50ff81dd/views.py#lines-88
So the title is a little confusing, I guess.
I have a script that displays some random data and other non-essentials when I open my shell. I'm using grequests to make my API calls, since I'm using more than one URL. For my weather data I use WeatherUnderground's API, since it offers active alerts. The alerts and conditions data are on separate pages. What I can't figure out is how to insert the appropriate page name into the grequests object when it is making requests. Here is the code that I have:
URLS = ['http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json',
        'http://www.ourmanna.com/verses/api/get/?format=json',
        'http://quotes.rest/qod.json',
        'http://httpbin.org/ip']
requests = (grequests.get(url) for url in URLS)
responses = grequests.map(requests)
data = [response.json() for response in responses]
#json parsing from here
In the URL 'http://api.wunderground.com/api/'+api_id+'/conditions/q/autoip.json' I need to make an API request to both conditions and alerts to retrieve the data I need. How do I do this without writing out a fourth URL string?
I've tried
pages = ['conditions', 'alerts']
URL = ['http://api.wunderground.com/api/'+api_id+([p for p in pages])/q/autoip.json']
but, as I'm sure some of you more seasoned programmers know, that threw an exception. So how can I iterate through these pages, or will I have to write out both complete URLs?
Thanks!
OK, I was actually able to figure out how to call each individual page within the grequests object by using a simple for loop. Here is the code that I used to produce the expected results:
import grequests

pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'

for p in pages:
    URLS = ['http://api.wunderground.com/api/'+api_id+'/'+p+'/q/autoip.json',
            'http://www.ourmanna.com/verses/api/get/?format=json',
            'http://quotes.rest/qod.json',
            'http://httpbin.org/ip']
    # create grequests objects and retrieve the results for this page
    requests = (grequests.get(url) for url in URLS)
    responses = grequests.map(requests)
    data = [response.json() for response in responses]
    # json parsing from here
I'm still not sure why I couldn't figure this out before.
Documentation for the grequests library is here.
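A slightly tidier variant (a sketch, not from the original answer) builds one URL per page up front, so the three static URLs are only fetched once and the requests name no longer shadows the requests library:

import grequests

pages = ['conditions', 'alerts']
api_id = 'myapikeyhere'

# one weather URL per page, plus the static URLs, fetched in a single batch
urls = ['http://api.wunderground.com/api/' + api_id + '/' + p + '/q/autoip.json'
        for p in pages]
urls += ['http://www.ourmanna.com/verses/api/get/?format=json',
         'http://quotes.rest/qod.json',
         'http://httpbin.org/ip']

reqs = (grequests.get(u) for u in urls)
responses = grequests.map(reqs)
data = [r.json() for r in responses]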
UPDATE: See MarkLogic 8 - Stream large result set to a file - JavaScript - Node.js Client API for someone's answer on how to do this in JavaScript. This question is specifically asking about XQuery.
I have a web application that consumes rest services hosted in node.js.
Node simply proxies the request to XQuery which then queries MarkLogic.
These queries already have paging setup and work fine in the normal case to return a page of data to the UI.
I need to have an export feature such that when I put a URL parameter of export=all on a request, it doesn't look up a page anymore.
At that point it should get the whole result set, even if it's a million records, and save it to a file.
The actual request needs to return immediately saying, "We will notify you when your download is ready."
One suggestion was to use xdmp:spawn to call the XQuery in the background which would save the results to a file. My actual HTTP request could then return immediately.
For the spawn piece, I think the idea is that I run my query with different options in order to get all results instead of one page. Then I would loop through the data and create a string variable to call xdmp:save with.
Some questions: is this a good idea? Is there a better way? If I loop through the result set and it happens to be very large (gigabytes), it could cause memory issues.
Is there no way to directly stream the results to a file in XQuery?
Note: Another idea I had was to intercept the request at the proxy (node) layer and then do an xdmp:estimate to get the record count, and then loop through querying each page and flushing it to disk. In this case I would need to find some way to return my request immediately yet process in the background in node; there seem to be some ideas on that here: http://www.pubnub.com/blog/node-background-jobs-async-processing-for-async-language/
One possible strategy would be to use a self-spawning task that, on each iteration, gets the next page of the results for a query.
Instead of saving the results directly to a file, however, you might want to consider using xdmp:http-post() to send each page to a server:
http://docs.marklogic.com/xdmp:http-post?q=xdmp:http-post&v=8.0&api=true
In particular, the server could be a Node.js server that appends each page as it arrives to a file or any other datasink.
That way, Node.js could handle the long-running asynchronous IO with minimal load on the database server.
When a self-spawned task hits the end of the query, it can again use an HTTP request to notify Node.js to close the file and report that the export is finished.
Hoping that helps,
I'm accessing GMail via IMAP using OAuth2 authentication and Zend_Mail_Protocol_Imap.
It all works great.
What I need to do is present emails in thread form, just like the GMail interface. Google makes this really easy because they have an X-GM-THRID header that links a conversation with a 64-bit unsigned integer.
My problem is: when presented with a single email, how do I find out what X-GM-THRID it belongs to?
First off Google says that there is a server extension X-GM-EXT-1 which is active. You can check it is there using the CAPABILITY command (and I have).
All the information suggests that if this is active then the X-GM-THRID will simply be returned as a header, but it isn't.
Perhaps I need to ask Google to return it via the fetch command. Google does describe a simple fetch process here:
https://developers.google.com/google-apps/gmail/imap_extensions
My code is sending TAG5 FETCH 3673 (FLAGS RFC822.HEADER X-GM-THRID) but the headers do not include an entry for X-GM-THRID.
I've even simplified it to TAG6 FETCH 3673 (X-GM-THRID) to be exactly as described in the Google example. In this case no headers are returned.
I'm not massively familiar with IMAP commands and I'm not sure if Zend_Mail_Protocol_Imap is abstracting some handling which means this header is being removed.
But I do know that this is driving me mad.
Am I missing something? Is it not a header?
Okay, so it looks like it is not a header. It is an attribute in the IMAP FETCH command and response.
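To illustrate the distinction outside of Zend (a Python imaplib sketch; the credentials and the message number 3673 are placeholders), the thread id comes back as a FETCH data item rather than as a message header:

import imaplib

# placeholders: substitute real credentials (e.g. an OAuth2 SASL string
# via authenticate('XOAUTH2', ...) as in the original setup)
conn = imaplib.IMAP4_SSL('imap.gmail.com')
conn.login('user@gmail.com', 'app-password')
conn.select('INBOX')

# X-GM-THRID is requested as a FETCH item, not as an RFC822 header
typ, data = conn.fetch('3673', '(X-GM-THRID FLAGS)')
print(typ, data)
# e.g. ('OK', [b'3673 (X-GM-THRID 1278455344230334865 FLAGS (\\Seen))'])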
The standard fetch command sent by Zend_Mail_Protocol_Imap is "TAG5 FETCH 3673 (FLAGS RFC822.HEADER)"
The code that handles the response only expects to be dealing with 'FLAGS' and 'RFC822.HEADER'. It passes this information to a Zend_Mail_Message object which extends Zend_Mail_Part.
Zend_Mail_Part parses information about the flags. It also parses the headers.
The additional 'X-GM-THRID' attribute that I added does actually get a response, but since it is not passed back to Zend_Mail_Message there is no way for me to use it. It gets lost in the ether (at around line 171 of Zend_Mail_Storage_Imap in my Zend library, to be exact).
So I've hacked the core... Zend_Mail_Storage_Imap::getMessage now expects $data['X-GM-THRID'] and passes it to the Zend_Mail_Part constructor. And I now have a method Zend_Mail_Part::getXGmThrid which solves all my problems. I'll obviously refactor these into my own classes extending Zend_Mail_Storage_Imap and Zend_Mail_Part in the not too distant future, but for now I know this works.