After generating a hashed password, I try to run my tor-request module and keep getting an error. I have updated the new password in my torrc file and in my script, and restarted Tor. I still get this error and don't know how to fix it. I see it was a common bug a few years ago, but I found no solution. Thanks
Uncaught Error: Error communicating with Tor ControlPort
515 Authentication failed: Password did not match HashedControlPassword value from configuration. Maybe you tried a plain text password? If so, the standard requires that you put it in double quotes.
torrc file:
HashedControlPassword 16:142<MoreNumbers>49
It matches the hash I generated on the command line.
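For what it's worth, the control-port handshake can be tested by hand. A minimal Python sketch (the ControlPort 9051 and the password value are assumptions, not from the question) that sends the AUTHENTICATE command with the plain-text password in double quotes, as the 515 error message suggests:

```python
import socket

def build_authenticate_cmd(password: str) -> bytes:
    # The control protocol expects the *plain-text* password, wrapped in
    # double quotes; Tor hashes it and compares the result against the
    # HashedControlPassword line in torrc.
    return b'AUTHENTICATE "' + password.encode() + b'"\r\n'

def tor_authenticate(password: str, host: str = "127.0.0.1", port: int = 9051) -> str:
    # Connect to the ControlPort and send AUTHENTICATE; a reply starting
    # with "250" means the password matched the hash in torrc.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_authenticate_cmd(password))
        return sock.recv(1024).decode()
```

Note it is the plain-text password that goes over the wire here, not the `16:…` hash, which is a common source of this exact 515 error.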
I use Apache Airflow for daily ETL jobs. I installed it in Azure Kubernetes Service using the provided Helm chart. It's been running fine for half a year, but since recently I'm unable to access the logs in the webserver (this used to always work fine).
I'm getting the following error:
*** Log file does not exist: /opt/airflow/logs/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** Fetching from: http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** !!!! Please make sure that all your Airflow components (e.g. schedulers, webservers and workers) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) !!!!!
****** See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
****** Failed to fetch log file from worker. Client error '403 FORBIDDEN' for url 'http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log'
For more information check: https://httpstatuses.com/403
What have I tried:
I've made sure that the log file exists (I can exec into the airflow-worker-0 pod and read the file on the command line at the location specified in the error).
I've rolled back my deployment to an earlier commit from when I know for sure it was still working, but it made no difference.
I was using webserverSecretKeySecretName in the values.yaml configuration. I changed the secret to which that name was pointing (deleted it and created a new one, as described here: https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#webserver-secret-key) but it didn't work (no difference, same error).
I changed the config to use a webserverSecretKey instead (in plain text), no difference.
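For reference, a rough sketch of the two mutually exclusive ways the chart exposes for this in values.yaml (the secret name here is illustrative; per the production guide linked above, the referenced secret is expected to carry the key webserver-secret-key):

```yaml
# Option A: point the chart at an existing Kubernetes secret
webserverSecretKeySecretName: my-webserver-secret-name   # illustrative

# Option B: set the key directly in plain text instead
# webserverSecretKey: "0123456789abcdef"                 # illustrative
```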
My thoughts/observations:
The error states that the log file doesn't exist, but that's not true. It probably just can't access it.
The time is the same in all pods (I double-checked by exec-ing into them and running date on the command line).
The webserver secret is the same in the worker, the scheduler, and the webserver (I double checked by exec-ing into them and finding the corresponding env variable)
Any ideas?
Turns out this was a known bug with the latest release (2.4.0) of the official Airflow Helm chart, reported here:
https://github.com/apache/airflow/discussions/26490
It should be resolved in version 2.4.1, which should be available in the next couple of days.
I am trying to export data from a MongoDB cluster to my computer using my URI connection string, but I am getting the error: could not connect to server: connection() : auth error: sasl conversation error: unable to authenticate using mechanism "SCRAM-SHA-1": (AtlasError) bad auth Authentication failed
This is the command I am using:
mongoexport --uri="mongodb+srv://yash_verma:<******>@jspsych-eymdu.mongodb.net/test?retryWrites=true&w=majority" --collection=entries --out=entries.csv
Could anyone tell me what it is that I am doing wrong? I am sure I am using the correct password.
I am also fairly new to programming and have tried to look online for a solution, but haven't found one yet.
Any help would be greatly appreciated.
Thanks,
Yash.
Your connection string looks fine, but make sure to remove the angle brackets (<>) around <password>, like so:
mongoexport --uri="mongodb+srv://yash_verma:******@jspsych-eymdu.mongodb.net/test?retryWrites=true&w=majority" --collection=entries --out=entries.csv
…where ****** is the database password (not the account password!) of the database user yash_verma.
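One more thing worth checking: if the password itself contains URI-reserved characters (@, :, /, and so on), it must be percent-encoded before going into the connection string, or authentication will fail even with the correct password. A small Python sketch (the password shown is made up):

```python
from urllib.parse import quote_plus

# Hypothetical password containing characters that are reserved in URIs.
password = "p@ss/word"
encoded = quote_plus(password)  # "p%40ss%2Fword"

# Build the URI with the encoded password (cluster host from the question).
uri = (
    "mongodb+srv://yash_verma:" + encoded +
    "@jspsych-eymdu.mongodb.net/test?retryWrites=true&w=majority"
)
```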
I am trying to create a script that will run commands over my 1000 Cisco devices.
The device model is: Cisco Sx220 Series Switch Software, Version 1.1.4.1
The issue is that there is some kind of strange behavior for some of those Cisco devices.
When I try to log in with regular SSH (PuTTY) using the correct credentials, I first get 'Authentication Failure', and after about a second I get the user/password prompt again; typing the same credentials a second time gives me a successful login.
The problem is that when I try to connect using my script (which uses ParallelSSHClient), the connection drops after the authentication failure message, and I cannot enter the credentials again because the script gets the exception and terminates.
I am looking for a way to enter those credentials manually: connect to the machine, get the Authentication Failure message and ignore it, recognize that the user or password prompt is on screen, and then send the credentials.
I have looked for this kind of procedure everywhere, but without any luck.
Does ParallelSSHClient have this feature?
If Paramiko has it, I am willing to move to Paramiko.
Thanks :)
from pssh.clients import ParallelSSHClient

client = ParallelSSHClient(hosts=ip_list, user=user, password=password)
try:
    # Connections are made lazily, so authentication errors surface here,
    # not in the constructor.
    command_output = client.run_command(command)
except Exception as err:
    print("There was an issue with connecting to the machine:", err)
Here is the actual error that I am getting:
pssh.exceptions.AuthenticationException: ('Authentication error while connecting to %s:%s - %s', '172.31.255.10', 22, AuthenticationException('Password authentication failed',))
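As far as I know, neither ParallelSSHClient nor Paramiko retries a failed authentication on its own, but you can wrap the connect call and retry it yourself, which matches the "fails once, succeeds on the identical second attempt" behavior these switches show. A generic sketch (the retry count and delay are arbitrary choices); connect would be a closure around paramiko.SSHClient().connect(...) or a per-host pssh connection:

```python
import time

def retry_auth(connect, retries=2, delay=1.0):
    """Call connect() up to retries+1 times, retrying on auth failures.

    Some devices reject the first login attempt and accept an identical
    second one, so retrying the whole connection is enough.
    """
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return connect()
        except Exception as exc:  # e.g. paramiko.AuthenticationException
            last_exc = exc
            if attempt < retries:
                time.sleep(delay)
    raise last_exc
```

With Paramiko you would pass something like lambda: client.connect(host, username=user, password=password) for each host; with pssh you could re-run the command for only the hosts that raised.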
In my SSH client I am calling ssh_userauth_none before ssh_userauth_list(), but it always returns SSH_AUTH_ERROR. When I try to find the reason for the error using ssh_get_error(), it says Socket error: disconnected.
I have used the same code on another Linux machine (Ubuntu), where it works fine. But when I try it on an embedded machine based on CentOS, it always fails with SSH_AUTH_ERROR.
Am I missing any fields in the ssh_config file that are needed to make none authentication work?
Or is it related to some username/path issue?
While trying to run the fw1-loggrabber client as
fw1-loggrabber -l lea.conf --debug-level 3
I get the debug message shown below.
I have installed Check Point R75.20 SPLAT. I created a new OPSEC Application using the SmartDashboard client, which generated a client DN. After configuring the Check Point server I got the server DN. In the lea.conf file I now have the entries:
opsec_sic_name "CN=FinalShot,O=cpmodule..gy9quu" (the client DN generated when creating the OPSEC Application via SmartDashboard)
lea_server opsec_entity_sic_name "o=cpmodule..gy9quu" (the server DN obtained from the server)
The error I am getting is:
ERROR: SIC ERROR 111 - SIC Error for ssl_opsec: Peer sent wrong DN: cn=cp_mgmt,o=cpmodule..gy9quu
What might be the problem?
I saw that the value DN: cn=cp_mgmt,o=cpmodule..gy9quu is under MySicName in the file $CPDIR/registry/HKLM_registry.data.
In the lea.conf file I'm supposed to put the server DN, which is o=cpmodule..gy9quu. I don't know what the problem is here.
Thanks.
I solved the problem as follows:
I changed the line in the file:
$CPDIR/registry/HKLM_registry.data
containing
MySicName: cn=cp_mgmt,o=cpmodule..gy9quu
to
MySicName: o=cpmodule..gy9quu
Thanks anyways :)