Node-RED + DB2 - msg : string[18] "No response object" - node.js
So, I'm a beginner in Node-RED and need to build a simple API that runs DB2 queries through flows. I'm using node-red-contrib-db2 to accomplish that. The thing is, I can get the query results into payloads and see them in the Debug node, whether the flow is triggered by an Inject (timestamp) node or by an HTTP request. However, I can't get these results into the HTTP Response node and can't find the reason. Is it a problem with the db2 plugin, or just me?
Exported nodes below:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":150,"y":140,"wires":[["9affb306.caf7e"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":940,"y":100,"wires":[]},{"id":"41708443.e4670c","type":"inject","z":"b4aa8db5.217028","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":220,"y":40,"wires":[["22a6e217.ead65e"]]},{"id":"9d1e6783.eb246","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"IOCDATA","x":560,"y":40,"wires":[["80e51c1b.23b378"],[]]},{"id":"80e51c1b.23b378","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":730,"y":40,"wires":[]},{"id":"22a6e217.ead65e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";\nreturn msg;","outputs":1,"noerr":0,"x":390,"y":40,"wires":[["9d1e6783.eb246"]]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"IOCDATA","x":500,"y":140,"wires":[["bda39d37.edb418","74e28d3e.039be4"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view where id = 1\";\nreturn msg;","outputs":1,"noerr":0,"x":330,"y":140,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":950,"y":180,"wires":[]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.62","port":"50002","db":"iocdata"}]
This is an issue with the ibmdb node you are using: it does not reuse the received message when it sends its results. That means the msg.req and msg.res properties provided by the HTTP In node are no longer set on the message by the time it reaches the HTTP Response node, so the response node doesn't know which request to respond to.
To work around the issue, one approach, which isn't ideal, is to store msg.req and msg.res in flow context using a Change node before the ibmdb node, and then copy them back onto the message after the ibmdb node, as in the sketch below. This isn't ideal because it can only handle one request at a time.
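A minimal sketch of that workaround, written as two Function nodes instead of Change nodes (the flow-context keys are illustrative):

// Function node placed BEFORE the ibmdb node:
// stash the HTTP request/response objects in flow context
flow.set("req", msg.req);
flow.set("res", msg.res);
return msg;

// Function node placed AFTER the ibmdb node:
// restore them so the HTTP Response node knows which request to answer
msg.req = flow.get("req");
msg.res = flow.get("res");
return msg;

Because the stashed objects are shared flow state, a second request arriving before the first completes will overwrite them, which is why this only handles one request at a time.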
It would be best to raise an issue against the ibmdb node.
I managed to get my flow working, at the cost of many workarounds and variable juggling, but it IS working now. The approach: select count, then select the rows, then join the rows, with msg.complete set once the count value is reached. Here is the code:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":90,"y":140,"wires":[["d5f42a96.83f688"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":980,"y":260,"wires":[]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":360,"y":200,"wires":[["35f99a5a.c7f87e"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select count(*) from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":170,"y":200,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":890,"y":380,"wires":[]},{"id":"d5f42a96.83f688","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"req","pt":"flow","to":"req","tot":"msg"},{"t":"set","p":"res","pt":"flow","to":"res","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":260,"y":140,"wires":[["9affb306.caf7e"]]},{"id":"c3ebb136.aa8988","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"req","pt":"msg","to":"req","tot":"flow"},{"t":"set","p":"res","pt":"msg","to":"res","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":800,"y":260,"wires":[["bda39d37.edb418"]]},{"id":"ca59ece2.844b3","type":"join","z":"b4aa8db5.217028","name":"","mode":"custom","build":"array","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":630,"y":260,"wires":[["c3ebb136.aa8988","74e28d3e.039be4"]]},{"id":"35f99a5a.c7f87e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.rowcount = msg.payload[1];\nmsg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";// fetch first \" + msg.count[1] + \" rows only\";\n\nreturn msg;","outputs":1,"noerr":0,"x":550,"y":200,"wires":[["327a8ae.a8ce2f6"]]},{"id":"2666e2ba.41dc8e","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":800,"y":200,"wires":[["9008e06f.bf6d7"],[]]},{"id":"ec61a7f3.68cf8","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"count","x":650,"y":320,"wires":[]},{"id":"327a8ae.a8ce2f6","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"rowcount","pt":"flow","to":"rowcount","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":670,"y":140,"wires":[["2666e2ba.41dc8e"]]},{"id":"90204f2d.8bafe8","type":"change","z":"b4aa8db5.217028","name":"","rules":[{"t":"set","p":"rowcount","pt":"msg","to":"rowcount","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":310,"y":320,"wires":[["6888cd0d.d00064"]]},{"id":"9008e06f.bf6d7","type":"counter","z":"b4aa8db5.217028","name":"","init":"0","step":"1","lower":"","upper":"","mode":"increment","outputs":"1","x":220,"y":260,"wires":[["90204f2d.8bafe8"]]},{"id":"6888cd0d.d00064","type":"function","z":"b4aa8db5.217028","name":"if rowcount === count","func":"if (msg.count === msg.rowcount) {\n msg.complete = true;\n}\n\nreturn 
msg;","outputs":1,"noerr":0,"x":440,"y":260,"wires":[["ca59ece2.844b3","ec61a7f3.68cf8","a63f6ad6.26f08"]]},{"id":"a63f6ad6.26f08","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"rowcount","x":660,"y":380,"wires":[]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.62","port":"50002","db":"iocdata"}]
EDIT 21/02/2018: the previous solution is not very good, because the counter node persists its value in a way I couldn't reset, so it ends up exceeding the desired rowcount. I had to implement my own counter in a Function node instead. New code below:
[{"id":"96197abb.fd4098","type":"http in","z":"b4aa8db5.217028","name":"","url":"/wastes","method":"get","upload":false,"swaggerDoc":"","x":90,"y":60,"wires":[["d5f42a96.83f688"]]},{"id":"bda39d37.edb418","type":"http response","z":"b4aa8db5.217028","name":"","statusCode":"200","headers":{},"x":720,"y":220,"wires":[]},{"id":"4a6bd014.f39868","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":740,"y":60,"wires":[["35f99a5a.c7f87e"],[]]},{"id":"9affb306.caf7e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.database = \"iocdata\";\nmsg.payload = \"select count(*) from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":530,"y":60,"wires":[["4a6bd014.f39868"]]},{"id":"74e28d3e.039be4","type":"debug","z":"b4aa8db5.217028","name":"","active":true,"tosidebar":true,"console":false,"tostatus":false,"complete":"payload","x":550,"y":280,"wires":[]},{"id":"d5f42a96.83f688","type":"change","z":"b4aa8db5.217028","name":"save req and res","rules":[{"t":"set","p":"req","pt":"flow","to":"req","tot":"msg"},{"t":"set","p":"res","pt":"flow","to":"res","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":290,"y":60,"wires":[["9affb306.caf7e"]]},{"id":"ca59ece2.844b3","type":"join","z":"b4aa8db5.217028","name":"","mode":"custom","build":"array","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"msg.count","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":390,"y":220,"wires":[["74e28d3e.039be4","c3ebb136.aa8988"]]},{"id":"35f99a5a.c7f87e","type":"function","z":"b4aa8db5.217028","name":"SQL Query","func":"msg.rowcount = msg.payload[1];\nmsg.database = \"iocdata\";\nmsg.payload = \"select * from viseu.waste_view\";\n\nreturn msg;","outputs":1,"noerr":0,"x":950,"y":60,"wires":[["327a8ae.a8ce2f6"]]},{"id":"2666e2ba.41dc8e","type":"ibmdb","z":"b4aa8db5.217028","mydb":"3a218407.1cca74","name":"SELECT waste_view","x":380,"y":140,"wires":[["90204f2d.8bafe8"],[]]},{"id":"327a8ae.a8ce2f6","type":"change","z":"b4aa8db5.217028","name":"save rowcount","rules":[{"t":"set","p":"rowcount","pt":"flow","to":"rowcount","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":160,"y":140,"wires":[["2666e2ba.41dc8e"]]},{"id":"90204f2d.8bafe8","type":"change","z":"b4aa8db5.217028","name":"get rowcount and count","rules":[{"t":"set","p":"rowcount","pt":"msg","to":"rowcount","tot":"flow"},{"t":"set","p":"count","pt":"msg","to":"count","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":630,"y":140,"wires":[["6888cd0d.d00064"]]},{"id":"6888cd0d.d00064","type":"function","z":"b4aa8db5.217028","name":"if count === rowcount","func":"//fix: msg.count ultrapassa msg.rowcount\nmsg.count = msg.count+1 || 1;\n\nif (msg.count === msg.rowcount) {\n msg.complete = true;\n msg.count = 0;\n}\n\nreturn msg;","outputs":1,"noerr":0,"x":880,"y":140,"wires":[["82ecfa98.9473d8"]]},{"id":"c3ebb136.aa8988","type":"change","z":"b4aa8db5.217028","name":"get req, res","rules":[{"t":"set","p":"req","pt":"msg","to":"req","tot":"flow"},{"t":"set","p":"res","pt":"msg","to":"res","tot":"flow"}],"action":"","property":"","from":"","to":"","reg":false,"x":550,"y":220,"wires":[["bda39d37.edb418"]]},{"id":"82ecfa98.9473d8","type":"change","z":"b4aa8db5.217028","name":"save 
count","rules":[{"t":"set","p":"count","pt":"flow","to":"count","tot":"msg"}],"action":"","property":"","from":"","to":"","reg":false,"x":210,"y":220,"wires":[["ca59ece2.844b3"]]},{"id":"3a218407.1cca74","type":"IbmDBdatabase","z":"","host":"10.102.0.69","port":"50002","db":"iocdata"}]
Related
Requests.post returns top 50 records only even after setting offset and limit
I am running a query against CI_INFOOBJECTS to fetch all the Webi documents present in the root folder and its subfolders. The query returns 70 records in Query Builder, but when I run it using requests.post it gives me only the top 50 records. I tried changing offset and limit, but it still returns the same 50 records. Can anyone help me resolve this? It is the best approach I have found so far to get all the reports from the folders and subfolders in order to update the source universe.

folder_get = requests.get(bip_url + '/v1/cmsquery', headers=headers)
folder_root = etree.fromstring(folder_get.text)
Query_var = 'SELECT SI_ID,SI_NAME FROM CI_INFOOBJECTS WHERE SI_KIND = \'WEBI\' AND SI_ANCESTOR = 6526 ORDER BY SI_ID'
folder_root[0].text = Query_var
data1 = etree.tostring(folder_root)
folder_post = requests.post(bip_url + '/v1/cmsquery?offset=51&limit=100', headers=headers, data=data1)
folder_post.content
Try using page and pagesize instead of offset and limit:

folder_post = requests.post(bip_url + '/v1/cmsquery?page=1&pagesize=100', headers=headers, data=data1)

This should give you the 70 records that you expect.
Twitter API: How to make query keep running?
Novice programmer here seeking help. I have already set up my code to use Twitter's premium API.

SEARCH_TERM = '#AAPL OR #FB OR #KO OR #ABT OR #PEPCO'
PRODUCT = 'fullarchive'
LABEL = 'my_label'

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query':SEARCH_TERM, 'fromDate':201501010000, 'toDate':201812310000})

However, when I run it I obtain the maximum number of tweets per search, which is 500. My question is: should I add maxResults=500 to the query? And how do I use the next parameter to keep the code running until all the tweets that correspond to my query are obtained?
To raise the number of results from the default of 100 to 500, yes, add maxResults to the query like this:

r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL),
                {'query':SEARCH_TERM,
                 'fromDate':201501010000,
                 'toDate':201812310000,
                 'maxResults':500})

You can make successive queries to get more results by using the next parameter. But, even easier, you can let TwitterAPI do this for you by using the TwitterPager class. Here is an example:

from TwitterAPI import TwitterAPI, TwitterPager

SEARCH_TERM = '#AAPL OR #FB OR #KO OR #ABT OR #PEPCO'
PRODUCT = 'fullarchive'
LABEL = 'my_label'

api = TwitterAPI(<consumer key>, <consumer secret>, <access token key>, <access token secret>)
pager = TwitterPager(api, 'tweets/search/%s/:%s' % (PRODUCT, LABEL),
                     {'query':SEARCH_TERM,
                      'fromDate':201501010000,
                      'toDate':201812310000})
for item in pager.get_iterator():
    print(item['text'] if 'text' in item else item)

This example will keep making successive requests with the next parameter until no more tweets can be downloaded.
Use the count variable in a raw_query, for example:

results = api.GetSearch(raw_query="q=twitter%20&result_type=recent&since=2014-07-19&count=100")
Mysql.connector Python, no result when connection is already used
I have some simple code that sends a message over HTTP to another web app and then checks that the message was properly inserted into the database (two times). So it is not this code that inserts into the database; that is done in the other app.

SELECT_TABLE1_BY_ID_AND_DATE = "SELECT * FROM table1 WHERE table1.id = %s AND timedata = FROM_UNIXTIME(%s)"
SELECT_TABLE2_BY_ID_AND_DATE = "SELECT * FROM table2 WHERE table2.id = %s AND timedata = FROM_UNIXTIME(%s)"

try:
    conn = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)
    cursor = conn.cursor()
    self.send1Message(msg1)  # Send to HTTP web app
    cursor.execute(SELECT_TABLE1_BY_ID_AND_DATE, (idD, timing))
    print(cursor.fetchall())  #1
    self.send2Message(msg2)  # Send to HTTP web app
    cursor.execute(SELECT_TABLE2_BY_ID_AND_DATE, (idD, timing))
    print(cursor.fetchall())  #2
except mysql.connector.Error as err:
    print("Something went wrong: {}".format(err))

If I use the same SQL connection for both sends, only the first fetchall returns data (#1 prints data and #2 prints an empty list). I also tried closing the connection after #1 and starting another connection; then it works for both (#1 prints data and #2 too). (I should mention that my queries are correct and that the data is inserted into the database in time by the web app.) Is this normal behaviour for a connection? Thanks a lot!
Try 2 connections...

conn1 = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)
conn2 = mysql.connector.connect(user=db['user'], password=db['password'], host=db['host'], port=db['port'], database="TEST", raise_on_warnings=True)

cursor1 = conn1.cursor()
self.send1Message(msg1)  # Send to HTTP web app
cursor1.execute(SELECT_TABLE1_BY_ID_AND_DATE, (idD, timing))
print(cursor1.fetchall())  #1

self.send2Message(msg2)  # Send to HTTP web app
cursor2 = conn2.cursor()
cursor2.execute(SELECT_TABLE2_BY_ID_AND_DATE, (idD, timing))
print(cursor2.fetchall())  #2
Is it possible to specify an id when creating an issue on GitLab?
I intend to transfer issues from Redmine to GitLab using this script: https://github.com/sdslabs/redmine-to-gitlab/blob/master/issue-tranfer.py It works, but I would like to keep the issue ids during the transition. By default GitLab just starts from #1 and increases. I tried adding newissue['iid'] = issue['id'] and variations to the parameters, but apparently GitLab simply does not permit assigning an id. Does anyone know if there's a way? issue is the data acquired from Redmine:

newissue = {}
newissue['id'] = pro['id']
newissue['title'] = issue['subject']
newissue['description'] = issue["description"]
if 'assigned_to' in issue:
    auser = con.finduserbyname(issue['assigned_to']['name'])
    if auser:
        newissue['assignee_id'] = auser['id']
print newissue
if 'fixed_version' in issue:
    newissue['milestone_id'] = issue['fixed_version']['id']
newiss = post('/projects/' + str(pro['id']) + '/issues', newissue)

and this is the post function:

def post(url, load={}):
    load['private_token'] = conf.token
    r = requests.post(conf.base_url + url, params=load, verify=conf.sslverify)
    return r.json()
The API does not allow you to specify an issue ID at creation time; the ID is intended to be sequential. The only way you could potentially accomplish this is to interact with the database directly. If you choose this route, I caution you to be extremely careful, and have backups.
Azure - Website - WebJob - Active FTP Download
I am working with Windows Azure Websites and WebJobs. I have a console application that I use to download an FTP file nightly. They recently switched from passive to active FTP; I do not have any control over this. The code below worked for passive FTP, and works for active FTP on my computer. However, it does not work when I add it to a WebJob on Azure. In this code I am able to get the content length, so I am logging in correctly and I have the correct URL.

Dim request As FtpWebRequest = DirectCast(FtpWebRequest.Create(strTempFTPUrl), FtpWebRequest)
request.Method = WebRequestMethods.Ftp.GetFileSize
Dim nc As New NetworkCredential(FTPUserName, FTPPassword)
request.Credentials = nc
request.UseBinary = True
request.UsePassive = False
request.KeepAlive = True
request.Proxy = Nothing

' Get the result (size)
Dim resp As FtpWebResponse = DirectCast(request.GetResponse(), FtpWebResponse)
Dim contLen As Int64 = resp.ContentLength

' And now download the file
request = DirectCast(FtpWebRequest.Create(strTempFTPUrl), FtpWebRequest)
request.Method = WebRequestMethods.Ftp.DownloadFile
request.Credentials = nc
request.UseBinary = True
request.UsePassive = False
request.KeepAlive = True
request.Proxy = Nothing
resp = DirectCast(request.GetResponse(), FtpWebResponse)

The error that I receive is: "The underlying connection was closed: An unexpected error occurred on a receive." It happens on the second resp = DirectCast(request.GetResponse(), FtpWebResponse). Does anyone have any suggestions on what I can do? Edit: This is not a VM, so as far as I know I do not have control over the firewall. This is a standard website. Thank you very much!
I had this same problem and was able to solve it by increasing the connection limit per service point. By default it is set to 2; I increased it to 10:

request.ServicePoint.ConnectionLimit = 10

If you have timeout problems, also change the Timeout and ReadWriteTimeout properties. Below is the link for a case similar to ours: How can I programmatically remove the 2 connection limit in WebClient