I have a FreeBSD server and use pycurl as the library for making curl requests.
Since the underlying stack includes GnuTLS, which is vulnerable to CVE-2018-16868 (a Bleichenbacher-type side-channel padding attack), I am looking for a fix that avoids this issue.
I have searched the internet but found no information on this issue from a pycurl perspective.
Thanks.
The FreeBSD gnutls port was updated to 3.6.5 on 19 Dec 2018.
According to https://gitlab.com/gnutls/gnutls/blob/master/NEWS 3.6.5 implements the necessary patches against the attacks from that CVE.
See also https://gitlab.com/gnutls/gnutls/merge_requests/832 and https://gitlab.com/gnutls/gnutls/issues/630 for the patch and the bugticket.
So just updating your ports should fix the issue.
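To double-check which TLS backend your pycurl build actually links against, you can inspect pycurl's version string (pycurl exposes libcurl's version line as pycurl.version). Here is a minimal sketch; the sample version line below is a made-up placeholder so the snippet runs even without pycurl installed:

```python
import re

def tls_backend(version_string):
    """Extract the TLS library token (e.g. 'GnuTLS/3.6.5') from a
    libcurl-style version line such as the one in pycurl.version."""
    match = re.search(r"\b(GnuTLS|OpenSSL|LibreSSL|NSS|mbedTLS)/[0-9][\w.]*",
                      version_string)
    return match.group(0) if match else None

# On a real system you would pass pycurl.version instead of this sample line:
sample = "PycURL/7.43.0.2 libcurl/7.63.0 GnuTLS/3.6.5 zlib/1.2.11"
print(tls_backend(sample))  # GnuTLS/3.6.5
```

If the reported GnuTLS version is 3.6.5 or later, the patches for CVE-2018-16868 are in place.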
After a fresh install of Windows 10, I was not able to connect to a Parsec host from another Windows PC.
The error was:
Error: Client Connection Failure (-10)
According to the docs, this refers to DECODE_ERR_INIT.
There can be several reasons for these two errors, but the Parsec docs do not suggest possible solutions.
In my case, going to Apps & Features > Optional Features > Add a feature, searching for Media Feature Pack, installing it, and rebooting fixed the problem.
I was able to discover this cause because Rainway and Dixter were also failing due to a missing .dll related to this exact feature.
I am trying to connect to RabbitMQ from Python. Here is the code I am using:
connection = pika.SelectConnection(parameters, self.on_connection_open, self.on_open_error_callback,
                                   stop_ioloop_on_close=False)
I set up the configuration in RabbitMQ and copied the same settings into the Python code, but running it throws the error below.
TypeError: __init__() got an unexpected keyword argument 'stop_ioloop_on_close'
Can anyone help me fix this issue? For your information, I am using the latest versions of all the software involved.
Thanks in advance!
To fix this issue, downgrade pika to version 0.11.2; the recent versions throw this error.
The problem is that the argument was removed in version 1.0.0 because of this issue. You should pin your requirements to make sure a version older than 1.0 is always installed.
e.g.
Add something like this to the requirements file of your project.
pika<1.0
In addition it's probably worth looking into having the code fixed, and then removing the version restriction.
If you don't want to downgrade, you can simply remove the stop_ioloop_on_close argument. The effective behavior stays the same: the connection no longer decides what to do on close, so the ioloop keeps running.
If you do want to stop the loop when the connection closes, use the on_close_callback parameter and call connection.ioloop.stop() yourself as needed.
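If the code has to run against both old and new pika releases for a while, another option is a small compatibility shim that only passes stop_ioloop_on_close on pika < 1.0. A sketch using only the standard library (in real code the version string would come from pika.__version__; the helper name is made up for illustration):

```python
def select_connection_kwargs(pika_version):
    """Return the extra keyword arguments valid for the given pika
    version: stop_ioloop_on_close was removed in pika 1.0.0, so it is
    only included for older releases."""
    major = int(pika_version.split(".")[0])
    if major < 1:
        return {"stop_ioloop_on_close": False}
    return {}  # pika >= 1.0: rely on on_close_callback instead

print(select_connection_kwargs("0.11.2"))  # {'stop_ioloop_on_close': False}
print(select_connection_kwargs("1.3.2"))   # {}
```

The resulting dict can then be splatted into the call, e.g. pika.SelectConnection(parameters, self.on_connection_open, self.on_open_error_callback, **extra).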
Has anyone successfully connected to SQL Server 7 (TDS 7.0) with node.js? How did you do it?
I've tried tedious and node-mssql, but the lowest version of the TDS protocol that they use is 7.1. I need to access a SQL Server 7 database, which only speaks TDS 7.0. (Ancient, I know . . .)
The only library I've found that looks like it works is node-tds, but it was abandoned long ago, so getting help with it is pretty unlikely. I get TypeError: invalid_argument when trying to connect, and there is no documentation on the connect() function. :(
Well, I got it to work, but it wasn't as simple as installing a single node module that contained all the necessary pieces. I ended up using node-odbc. You just have to install and configure a couple of prerequisites (unixODBC and FreeTDS). This was always a pain when I had to do it in the past, but this time around I found instructions for installing both via Homebrew. It's probably just as easy with your package manager of choice. Configuring the setup took a little work, but it was manageable by following the instruction guide at freetds.org.
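For reference, the key part of that FreeTDS setup is pinning the protocol version in freetds.conf. A minimal sketch; the entry name and hostname below are placeholders, not values from the original setup:

```ini
# freetds.conf sketch -- entry name and host are placeholders
[sqlserver7]
    host = legacy-db.example.com
    port = 1433
    # Force the old protocol that SQL Server 7 speaks
    tds version = 7.0
```

unixODBC's DSN (in odbc.ini) then points at this entry via the FreeTDS ODBC driver, and node-odbc connects through that DSN.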
I need to find the correct way to avoid the recurring problem described below.
We have a C++ library that reads data from httpd and forwards that data to one of our applications. The library is built against the APR library. The problem is that many customers run, or might run, a newer Apache along with some other version of the APR library, which causes our library to fail with unresolved external symbols. To fix this, we currently rebuild our library against the exact APR version each customer has and then ship it. This is a recurring problem: whenever customers update their Apache to the latest version, we see the same issue again and have to repeat the steps.
Is there any way to prevent this kind of problem from occurring again in the future?
I've been trying to work through an issue that developed while attempting to upgrade our testing environment from Ubuntu 12.04 to 14.04 on AWS. Prior to this, the package-repository version of Solr was 1.4.1, which matched the 1.4.1 solrj client integrated with our application.
Changing the base AMI to the latest 14.04 and running our default deploy caused the Solr 3.6.2 server to be installed. It appeared to accept our configs without issue; however, when our client tried to connect, we received two different errors:
The first was an unknown custom field, which we traced back to our deployment scripts keeping schema.xml and solrconfig.xml in the base directory instead of moving them to /etc/solr/conf/.
We corrected this issue, and then ran into the following:
'exception: Invalid version or the data in not in 'javabin' format'
This was generated by a wrapper on top of solrj, but I'll be honest and say I know nothing about Solr, so this may well be on our end. I've asked our dev team to look at 2 options:
1) Enabling 'server.setParser(new XMLResponseParser());'
Which is the recommendation on backwards compatibility for an older client.
2) Updating our client in the application to 3.6.2
-I know less about the requirements for this.
My fallback is to revert to 1.4.1, but it appears it hasn't been touched since 2011, which makes me hesitant.
Any thoughts / suggestions would be appreciated!
Thanks!
I think the best option is to maintain the same version of Solr and Solrj.
I used Solr 1.4.1 for a long time and, while, as you said, most of it works with newer versions without any problem, a lot of things have actually changed since 1.4.*.
I did the same migration last year (from 1.4.1 to 3.6.1), and I can confirm that the 2nd way is the right one: the changes you must make in your client code are just "formal" and very quick.
Any workaround that lets different versions of solrj and Solr communicate is, as the word says, just a "workaround", and it could lead to unexpected (hidden) side effects later.