I am trying to insert rows into my database. Establishing a connection to the database succeeds, but when I try to insert the rows I get an SQL error that appears to come from my variable "network_number". I run nested for loops to iterate through the network numbers from 1.1.1 to 254.254.254 and add each unique IP to the database. The network number is built as a string, so should the column for it be VARCHAR or TEXT in order to include the full stops/periods? The desired output is to populate my database table with each network number. The SQL query is assigned to the variable sql_query.
def populate_ip_table(ip_ranges):
    network_numbers = ["", "", ""]
    information = "Populating the IP table..."
    total_ips = (len(ip_ranges) * 254**2)
    complete = 0
    for octet_one in ip_ranges:
        network_numbers[0] = str(octet_one)
        percentage_complete = round(100 / total_ips * complete, 2)
        information = f"{percentage_complete}% complete"
        output_information(information)
        for octet_two in range(1, 254 + 1):
            network_numbers[1] = str(octet_two)
            for octet_three in range(1, 254 + 1):
                network_numbers[2] = str(octet_three)
                network_number = ".".join(network_numbers)
                complete += 1
                sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ({network_number}, false, 0)"
                execute_sql_statement(sql_query)
    information = "100% complete"
    output_information(information)
Output
[ * ] Connecting to the PostgreSQL database...
[ * ] Connection successful
[ * ] Executing SQL statement
[ ! ] syntax error at or near ".50"
LINE 1: ...rd (ip, scanned_status, times_scanned) VALUES (1.1.50, false...
^
As stated by the Docs:
There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used
PostgreSQL Docs
I think you should use VARCHAR, due to the small, varying length of your IP string. text is effectively a varchar with no limit, but it may cause indexing problems if you try to insert a record whose compressed size is greater than 2712 bytes.
Actually, your problem is that you need to put extra single quotes around network_number so that PostgreSQL receives a string literal when the value is inserted.
To prove this, try inserting {network_number} like this:
network_number = "'" + ".".join(network_numbers) + "'"
sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ({network_number}, false, 0)"
OR:
sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ('{network_number}', false, 0)"
You could also use the inet data type, which will save you this hassle.
As stated by the docs:
PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses. It is better to use these types instead of plain text types to store network addresses, because these types offer input error checking and specialized operators and functions.
PostgreSQL: Network Address Types
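For illustration, a minimal sketch of how inet could be used; the CREATE TABLE is an assumption based on the columns in your INSERT, and the connection details are hypothetical:
import psycopg2  # assumed driver

connection = psycopg2.connect("dbname=scanner user=postgres")  # hypothetical connection details
cursor = connection.cursor()
# inet validates each address on write, so malformed values are rejected up front.
cursor.execute(
    "CREATE TABLE IF NOT EXISTS ip_scan_record (ip inet, scanned_status boolean, times_scanned integer)"
)
cursor.execute(
    "INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES (%s, false, 0)",
    ("1.1.50",),
)
connection.commit()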
Related
I am trying to create a virtual table in HANA based on a remote system table view.
If I run it at the command line using hdbsql
hdbsql H00=> create virtual table HanaIndexTable at "SYSRDL#CG_SOURCE"."<NULL>"."dbo"."sysiqvindex"
0 rows affected (overall time 305.661 msec; server time 215.870 msec)
I am able to select from HanaIndexTable and get results and see my index.
When I code it in python, I use the following command:
cursor.execute("""create virtual table HanaIndexTable1 at SYSRDL#CG_source.\<NULL\>.dbo.sysiqvindex""")
I think there is a problem with the NULL, but I see in the output that the escape character is doubled.
self = <hdbcli.dbapi.Cursor object at 0x7f02d61f43d0>
operation = 'create virtual table HanaIndexTable1 at SYSRDL#CG_source.\\<NULL\\>.dbo.sysiqvindex'
parameters = None
def __execute(self, operation, parameters = None):
# parameters is already checked as None or Tuple type.
> ret = self.__cursor.execute(operation, parameters=parameters, scrollable=self._scrollable)
E hdbcli.dbapi.ProgrammingError: (257, 'sql syntax error: incorrect syntax near "\\": line 1 col 58 (at pos 58)')
/usr/local/lib/python3.7/site-packages/hdbcli/dbapi.py:69: ProgrammingError
I have tried to run the command without the <> but get the following error.
hdbcli.dbapi.ProgrammingError: (257, 'sql syntax error: incorrect syntax near "NULL": line 1 col 58 (at pos 58)')
I have tried upper case, lower case and escaping. Is what I am trying to do impossible?
There was an issue with capitalization between HANA and my remote source. I also needed more escaping rather than less.
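For anyone hitting the same thing, here is a sketch of the kind of statement that lines up with the hdbsql command that worked; the capitalization of the remote source name and the connection details are assumptions, so match whatever your remote source actually uses and quote each identifier (including the literal <NULL> schema placeholder):
from hdbcli import dbapi

# Hypothetical connection details.
conn = dbapi.connect(address="hana-host", port=30015, user="USER", password="PASS")
cursor = conn.cursor()
# Quote every identifier so '<' and '>' are taken literally, and keep the remote
# source name in the exact capitalization the remote system expects.
cursor.execute(
    'create virtual table HanaIndexTable1 '
    'at "SYSRDL#CG_SOURCE"."<NULL>"."dbo"."sysiqvindex"'
)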
When I try to retrieve my table in Informix with the ifxpy package, I get this error:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-6-b0557e7f099b> in <module>
16 while dictionary != False:
17 tph.append(dictionary)
---> 18 dictionary = IfxPy.fetch_assoc(stmt)
19 print(pd.DataFrame(tph))
Exception: [Informix][Informix ODBC Driver]Invalid string or buffer length. SQLCODE=-11071
This is my code:
import IfxPy
import pandas as pd

ConStr = "SERVER=informix1;DATABASE=ir_fmois;HOST=127.0.0.1;SERVICE=9092;UID=informix;PWD=1234;"

try:
    # netstat -a | findstr 9088
    conn = IfxPy.connect(ConStr, "", "")
except Exception as e:
    print('ERROR: Connect failed')
    print(e)
    quit()

sql = "SELECT * FROM oih"
stmt = IfxPy.exec_immediate(conn, sql)
dictionary = IfxPy.fetch_assoc(stmt)
tph = []
while dictionary != False:
    tph.append(dictionary)
    dictionary = IfxPy.fetch_assoc(stmt)
print(pd.DataFrame(tph))
When I print the dataframe I see that I got only the first 4 rows.
I also tried this code and it doesn't throw any exception, but it too returns only the first 4 rows of my table:
import IfxPyDbi as dbapi2

ConStr = "SERVER=informix1;DATABASE=ir_fmois;HOST=127.0.0.1;SERVICE=9092;UID=informix;PWD=1234;"
conn = dbapi2.connect(ConStr, "", "")
cur = conn.cursor()
sql = "SELECT * FROM oih"
cur.execute(sql)
rows = cur.fetchall()
len(rows)
>>> 4
EDIT
I tried selecting the columns one by one, and the same error occurred when I selected a BYTE (BLOB) column (holding text data). This column has empty values in the first 4 rows, but the fifth wasn't empty, and I think this is why the error occurred at row 5.
I would really appreciate it if anyone has any idea how to solve this.
The finderr command reports:
$ finderr -11071
-11071 Invalid string or buffer length.
This Informix CLI error code is the same as SQLSTATE value S1090. The
following functions can return this error code: SQLBindCol(),
SQLBindParameter(), SQLBrowseConnect(), SQLColAttributes(),
SQLColumnPrivileges(), SQLColumns(), SQLConnect(), SQLDataSources(),
SQLDescribeCol(), SQLDriverConnect(), SQLDrivers(), SQLExecDirect(),
SQLExecute(), SQLForeignKeys(), SQLGetCursorName(), SQLGetData(),
SQLGetInfo(), SQLNativeSql(), SQLPrepare(), SQLPrimaryKeys(),
SQLProcedureColumns(), SQLProcedures(), SQLPutData(), SQLSetCursorName(),
SQLSpecialColumns(), SQLStatistics(), SQLTablePrivileges(), and SQLTables().
The value specified for the argument cbValueMax is less than zero.
Supply a value for the argument cbValueMax that is zero or greater.
For all functions, an argument that specified a string or buffer length, such
as cbCursor, cbConnStrIn, or cbSqlStr, had one or more of the following
problems: (1) It was less than 0, (2) It was less than 0 but not equal to
SQL_NTS or SQL_NULL_DATA, (3) It was less than 0 but the corresponding
pointer was not a null pointer, (4) It was equal to 1, or (5) It was too
large. Set the string or buffer length to a valid value.
Additionally for SQLExecDirect() and SQLExecute(), a parameter value that was
set with SQLBindParameter() had one of the following problems: (1) It was a
null pointer, and the parameter length was not 0, SQL_NULL_DATA,
SQL_DATA_AT_EXEC, or less than or equal to SQL_LEN_DATA_AT_EXEC_OFFSET, or
(2) It was not a null pointer, and the parameter length was less than 0 but
was not SQL_NTS, SQL_NULL_DATA, SQL_DATA_AT_EXEC, or less than or equal to
SQL_LEN_EXEC_DATA_AT_EXEC_OFFSET. Set the parameter value to a valid value.
$
Since your Python code is not calling any of those functions directly, that suggests there is a bug in the ifxpy driver. It is probably calling one of the listed functions with an incorrect argument.
As such, you should probably report it as an 'issue' on the GitHub site for the driver: https://github.com/OpenInformix/IfxPy
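In the meantime, one possible workaround is to leave the problematic BYTE/BLOB column out of the SELECT so the driver never has to bind it. This is only a sketch: col1 and col2 are placeholder names, since the layout of oih isn't shown.
import IfxPy
import pandas as pd

ConStr = "SERVER=informix1;DATABASE=ir_fmois;HOST=127.0.0.1;SERVICE=9092;UID=informix;PWD=1234;"
conn = IfxPy.connect(ConStr, "", "")

# List every column except the BYTE/BLOB one (col1, col2 are placeholders).
sql = "SELECT col1, col2 FROM oih"
stmt = IfxPy.exec_immediate(conn, sql)

tph = []
row = IfxPy.fetch_assoc(stmt)
while row != False:
    tph.append(row)
    row = IfxPy.fetch_assoc(stmt)

print(pd.DataFrame(tph))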
I'm new to Python and I need to connect to "Kibana" via Python. We're using Kibana 7.4.1. The requirement is just to get the count of hits.
Due to some restrictions, I need to use Python 3.6 only. I've added the "elasticsearch" and "elasticsearch-dsl" libraries.
I'm able to connect via the client, but I'm getting the wrong hit count.
Code:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import MultiSearch, Search
from elasticsearch_dsl.query import QueryString, Range, SimpleQueryString
es = Elasticsearch(['host2', 'host2'], http_auth=('usr', 'pass'), port=9200)
s = Search(using=es, index='c*')
s.filter(SimpleQueryString(query="tags:prod AND severity:INFO AND service: finder AND msg:* is processed"))
s.filter(Range(** {'#timestamp': {'gte': 'now-5m', 'lt': 'now'}}))
response = s.execute()
print("Got %d Hits:" % response['hits']['total']['value']) # Always coming as 1000 so this is wrong
Can I get some help with this, please?
First of all, a little clarification: you are connecting to Elasticsearch, not Kibana (Kibana is a client, like the program you are writing).
You always receive 10,000 as the result because your index contains more than 10,000 hits. This is a documented feature: since computing the exact count is expensive in the general case, it is only performed when explicitly requested. To obtain the right number of results you have two options:
set the query parameter track_total_hits to true
use the count API
track_total_hits
You can add this extra parameter to the search object, as reported in the documentation, as follows:
s = Search(using=es, index='c*')
s = s.extra(track_total_hits=True)
<the-rest of your code>
Count API approach
Instead of invoking the execute() function, you can simply use the count() function:
s = Search(using=es, index='c*')
# filter() returns a new Search object, so reassign it to keep the filters
s = s.filter(SimpleQueryString(query="tags:prod AND severity:INFO AND service: finder AND msg:* is processed"))
s = s.filter(Range(** {'#timestamp': {'gte': 'now-5m', 'lt': 'now'}}))
response = s.count()
print("Got %d Hits:" % response)
Kind regards
Scenario
I want to send data to an MQTT Broker (Cloud) by querying measurements from InfluxDB.
I have a field in the schema called status. It can be either 1 or 0. status=0 indicates that the series has not been sent to the cloud. If I get an acknowledgement from the MQTT broker, then I want to write the data back into the database with status=1.
As mentioned in the InfluxDB FAQ on duplicate data: if a point has the same timestamp as an existing point but a different field value, the new field value overwrites the old one.
In order to test this I created the following:
CREATE DATABASE dummy
USE dummy
INSERT meas_1,type=t1 status=0,value=123 1536157064275338300
query:
SELECT * FROM meas_1
provides
time                 status  type  value
1536157064275338300  0       t1    234
Now if I want to overwrite the series, I do the following:
INSERT meas_1,type=t1 status=1,value=123 1536157064275338300
which will overwrite the series
time                 status  type  value
1536157064275338300  1       t1    234
(Note: this is not possible via Tags currently in InfluxDB)
Usage
1. Query some information using the client with "status"=0.
2. Restructure the JSON to be sent to the cloud.
3. Send the information to the cloud.
4. If successful, write the output from step 1 back into the DB, but with status=1.
I am using the Python 3 InfluxDBClient to create the application (MQTT + InfluxDB).
Within the write_points API there is a parameter called batch_size which requires an int as input.
I am not sure how I can use this for the application described above. Can someone guide me on this, or on the schema of the DB, so that I can upload actual, non-redundant information to the cloud?
The batch_size is actually the length of the list of measurements that needs to be passed to write_points.
Steps
Create client and query from measurement (here, we query gps information)
client = InfluxDBClient(database='dummy')
op = client.query('SELECT * FROM gps WHERE "status"=0', epoch='ns')
Make the ResultSet into a list:
batch = list(op.get_points('gps'))
Create an empty list for the updated points:
updated_batch = []
Iterate through each measurement and change the status flag to 1. Note that numeric field values in InfluxDB default to float:
for each in batch:
    new_mes = {
        'measurement': 'gps',
        'tags': {
            'type': 'gps'
        },
        'time': each['time'],
        'fields': {
            'lat': float(each['lat']),
            'lon': float(each['lon']),
            'alt': float(each['alt']),
            'status': float(1)
        }
    }
    updated_batch.append(new_mes)
Finally, write the points back via the client with batch_size set to the length of updated_batch:
client.write_points(updated_batch, batch_size=len(updated_batch))
This overwrites the existing points because they carry the same timestamps, now with the status field set to 1.
Can anyone enlighten me as to why the following Modelica code generates an error in OpenModelica 1.12.0? If I remove the last two connect equations, it works fine.
class A
  Conn cc[3];
  Real a(start=0, fixed=true);
  Real b(start=0, fixed=true);
  Real c(start=0, fixed=true);
equation
  der(a) = 1;
  der(b) = 2;
  der(c) = 3;
  connect(a, cc[1].v);
  connect(b, cc[2].v); // Remove this to make it work
  connect(c, cc[3].v); // Remove this to make it work
end A;
The expandable connector cc is empty:
expandable connector Conn
end Conn;
The code above generates an error in OpenModelica 1.12.0:
[1] 15:07:44 Symbolic Error
Too many equations, over-determined system. The model has 6 equation(s) and 4 variable(s).
[2] 15:07:44 Symbolic Warning
[A: 11:3-11:21]: Equation 5 (size: 1) b = cc[2].v is not big enough to solve for enough variables.
Remaining unsolved variables are:
Already solved: b
Equations used to solve those variables:
Equation 2 (size: 1): der(b) = 2.0
[3] 15:07:44 Symbolic Warning
[A: 12:3-12:21]: Equation 6 (size: 1) c = cc[3].v is not big enough to solve for enough variables.
Remaining unsolved variables are:
Already solved: c
Equations used to solve those variables:
Equation 3 (size: 1): der(c) = 3.0
Basically, I want to have an array of expandable connectors to which I can add different types of variables as needed.
Edit 18/08/2018
Regarding only being able to connect "connectors" to an expandable connector, I actually see that the Modelica spec 3.4 says:
All components in an expandable connector are seen as connector instances even if they are not declared as
such [i.e. it is possible to connect to e.g. a Real variable].
So it seems I can connect Real variables to an expandable connector in OpenModelica; however, I get an error in JModelica:
Error at line 13, column 11, in file 'A.mo':
Connecting to an instance of a non-connector type is not allowed
I can also connect Real variables to normal (non-expandable) connectors in OpenModelica, but again this is not allowed in JModelica. So tools interpret the language spec differently!
You cannot connect Real variables to the expandable connector; they need to be connectors. But somehow that doesn't work either, which seems to be a bug. What works (tested in OpenModelica and Dymola) is the following:
class Expandable
  expandable connector Conn
    Real v[3];
  end Conn;

  connector RealOutput = output Real "'output Real' as connector";

  Conn cc;
  RealOutput a(start=0, fixed=true);
  RealOutput b(start=0, fixed=true);
  RealOutput c(start=0, fixed=true);
equation
  der(a) = 1;
  der(b) = 2;
  der(c) = 3;
  connect(a, cc.v[1]);
  connect(b, cc.v[2]);
  connect(c, cc.v[3]);
end Expandable;