How to use the fw1-loggrabber client - Linux

While trying to run the fw1-loggrabber client as
fw1-loggrabber -l lea.conf --debug-level 3
I get the debug output and error shown below.
I have installed Check Point R75.20 (SPLAT). I created a new OPSEC Application using the SmartDashboard client, which generated a Client DN. After configuring the Check Point server I got the Server DN. Now in the lea.conf file I have these entries:
opsec_sic_name "CN=FinalShot,O=cpmodule..gy9quu" (while creating OPSEC Application via Smart DashBoard)
lea_server opsec_entity_sic_name "o=cpmodule..gy9quu" (obtained from the server)
which is what i obtained from the above step.
The error I am getting is:
ERROR: SIC ERROR 111 - SIC Error for ssl_opsec: Peer sent wrong DN: cn=cp_mgmt,o=cpmodule..gy9quu
What might be the problem?
I saw that the DN cn=cp_mgmt,o=cpmodule..gy9quu appears under MySicName in the file $CPDIR/registry/HKLM_registry.data.
In the lea.conf file I'm supposed to put the Server DN, which is o=cpmodule..gy9quu, so I don't know what the problem is here.
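For reference, the rest of my lea.conf follows the usual fw1-loggrabber layout, roughly like this sketch (the management server IP, LEA port, auth type, and certificate file name here are placeholders, not my real values):
lea_server ip 10.0.0.1
lea_server auth_port 18184
lea_server auth_type sslca
opsec_sic_name "CN=FinalShot,O=cpmodule..gy9quu"
lea_server opsec_entity_sic_name "o=cpmodule..gy9quu"
opsec_sslca_file opsec.p12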
Thanks.

I solved the problem by doing the following:
I changed the line in the file:
$CPDIR/registry/HKLM_registry.data
containing
MySicName: cn=cp_mgmt,o=cpmodule..gy9quu
to
MySicName: o=cpmodule..gy9quu
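To double-check the value before and after the edit, a plain grep is enough:
grep -i MySicName $CPDIR/registry/HKLM_registry.data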
Thanks anyways :)

Related

Airflow can't reach logs from webserver due to 403 error

I use Apache Airflow for daily ETL jobs. I installed it in Azure Kubernetes Service using the provided Helm chart. It had been running fine for half a year, but recently I became unable to access the logs in the webserver (this always used to work fine).
I'm getting the following error:
*** Log file does not exist: /opt/airflow/logs/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** Fetching from: http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log
*** !!!! Please make sure that all your Airflow components (e.g. schedulers, webservers and workers) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) !!!!!
****** See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
****** Failed to fetch log file from worker. Client error '403 FORBIDDEN' for url 'http://airflow-worker-0.airflow-worker.default.svc.cluster.local:8793/dag_id=analytics_etl/run_id=manual__2022-09-26T09:25:50.010763+00:00/task_id=copy_device_table/attempt=18.log'
For more information check: https://httpstatuses.com/403
What have I tried:
I've made sure that the log file exists (I can exec into the airflow-worker-0 pod and read the file on command line in the location specified in the error).
I've rolled back my deployment to an earlier commit from when I know for sure it was still working, but it made no difference.
I was using webserverSecretKeySecretName in the values.yaml configuration. I changed the secret to which that name was pointing (deleted it and created a new one, as described here: https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#webserver-secret-key) but it didn't work (no difference, same error).
I changed the config to use a webserverSecretKey instead (in plain text); no difference (both variants are sketched below).
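For reference, the two variants look roughly like this in values.yaml (the secret name and key value here are placeholders, not the ones I actually use):
# either reference an existing Kubernetes secret...
webserverSecretKeySecretName: my-webserver-secret-key
# ...or (not recommended for production) set the key in plain text
webserverSecretKey: 0123456789abcdef0123456789abcdef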
My thoughts/observations:
The error states that the log file doesn't exist, but that's not true. It probably just can't access it.
The time is the same in all pods (I double checked by exec-ing into them and running date on the command line).
The webserver secret is the same in the worker, the scheduler, and the webserver (I double checked by exec-ing into them and reading the corresponding env variable; see the sketch below).
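For completeness, that check looked roughly like this (pod names are placeholders, and the env variable name assumes the chart exposes the key as AIRFLOW__WEBSERVER__SECRET_KEY):
for pod in airflow-webserver-abc123 airflow-scheduler-def456 airflow-worker-0; do
  kubectl exec -n default "$pod" -- printenv AIRFLOW__WEBSERVER__SECRET_KEY
done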
Any ideas?
Turns out this was a known bug with the latest release (2.4.0) of the official Airflow Helm chart, reported here:
https://github.com/apache/airflow/discussions/26490
It should be resolved in version 2.4.1, which should be available in the next couple of days.
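Once 2.4.1 is out, picking it up might look roughly like this (assuming the official apache-airflow/airflow chart; the release name and namespace are placeholders, and whether you bump the chart itself or only the airflowVersion/defaultAirflowTag image values depends on where the fix lands):
helm repo update
helm upgrade airflow apache-airflow/airflow \
  --namespace airflow \
  -f values.yaml \
  --set airflowVersion=2.4.1 \
  --set defaultAirflowTag=2.4.1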

"Too many opened conversations. Please close them and try again."

When accessing my application on MuleSoft, I am getting the following error:
Message : 42|Application|Too many opened conversations. Please close them and try again.
Element : null # my-application-1:null:null
--------------------------------------------------------------------------------
Exception stack is:
42|Application|Too many opened conversations. Please close them and try again. (org.mule.module.ws.consumer.SoapFaultException)
org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.unmarshalFault(Soap11FaultInInterceptor.java:84)
org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:51)
org.apache.cxf.binding.soap.interceptor.Soap11FaultInInterceptor.handleMessage(Soap11FaultInInterceptor.java:40)
org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
org.apache.cxf.interceptor.AbstractFaultChainInitiatorObserver.onMessage(AbstractFaultChainInitiatorObserver.java:113)
org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:69)
org.apache.cxf.binding.soap.interceptor.CheckFaultInterceptor.handleMessage(CheckFaultInterceptor.java:34)
org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
org.apache.cxf.endpoint.ClientImpl.onMessage(ClientImpl.java:856)
org.mule.module.cxf.transport.MuleUniversalConduit.sendResultBackToCxf(MuleUniversalConduit.java:359)
org.mule.module.cxf.transport.MuleUniversalConduit.dispatchMuleMessage(MuleUniversalConduit.java:316)
org.mule.module.cxf.transport.MuleUniversalConduit$2.handleMessage(MuleUniversalConduit.java:223)
(227 more...)
(set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
I am not sure where the error originates. Is it caused by some limitation on the backend server, or should I look into internal limits like maxThreads?
It looks like a SOAP fault is being received from the remote web service. The description 42|Application|Too many opened conversations. Please close them and try again. appears to be sent by the web service itself and does not look like something generated by Mule.
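For context, a fault carrying that description would look roughly like this on the wire (a sketch only; the fault code and any detail element vary per service):
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <soap:Fault>
      <faultcode>soap:Server</faultcode>
      <faultstring>42|Application|Too many opened conversations. Please close them and try again.</faultstring>
    </soap:Fault>
  </soap:Body>
</soap:Envelope>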

Error in metadata reply for topic test (PartCnt 0): Broker: Unknown topic or partition (using node-rdkafka)

Hi, I am getting this error when trying to connect to Kafka remotely on my prod server. My messages are not getting produced or consumed from my code. Let me know if any code sample is needed; I just want to know what the possible reasons for this error are.
Just in case someone else runs into the same issue: in my case I was using Kerberos, and the principal and keytab file I was using didn't have permission to create topics or produce/consume messages (the relevant client-side Kerberos settings are sketched below).
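For anyone checking the same thing, the Kerberos-related client settings in node-rdkafka look roughly like this sketch (broker address, principal, keytab path, and topic name are placeholders); the error only went away once the principal behind these settings had the right permissions:
// sketch: node-rdkafka producer over SASL/GSSAPI; all values below are placeholders
const Kafka = require('node-rdkafka');

const producer = new Kafka.Producer({
  'metadata.broker.list': 'broker1.example.com:9092',
  'security.protocol': 'sasl_plaintext',
  'sasl.mechanisms': 'GSSAPI',
  'sasl.kerberos.service.name': 'kafka',
  'sasl.kerberos.principal': 'myapp@EXAMPLE.COM',
  'sasl.kerberos.keytab': '/etc/security/keytabs/myapp.keytab'
});

producer.on('ready', () => {
  // producing only works if the principal has Write (and topic-create, if auto-creation is relied on) rights
  producer.produce('test', null, Buffer.from('hello'), null, Date.now());
});

producer.on('event.error', (err) => console.error(err));

producer.connect();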

Resource temporarily unavailable. Authentication by key failed (Error -18). (Error #35)

I'm using an Amazon EC2 instance to run my server with NodeJS and MongoDB.
I can save and load data from my Android application through the NodeJS server and MongoDB, but when I try to check the data using Robomongo (Robo 3T), the following error occurs:
Resource temporarily unavailable. Authentication by key (path of the .pem key) failed (Error -18). (Error #35)
(screenshots: Robomongo connection settings and the resulting error dialog)
This is what I did in Robomongo. These settings are the result of searching Google, and I think I did it right. What is wrong?
I solved the problem myself. When you have this problem:
1. Check /etc/mongod.conf. In the network interfaces section, bindIp must be 0.0.0.0, not 127.0.0.1 (see the sketch after this list).
2. Check the SSH user name. For an Amazon Linux AMI, the user name is ec2-user. For other AMIs, check this link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
3. If that didn't help, try downloading the build the developer uploaded (1.2 - Beta): https://github.com/Studio3T/robomongo/issues/1189#issuecomment-353279070
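For step 1, the relevant part of /etc/mongod.conf (YAML format on recent MongoDB versions) looks roughly like this:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0    # listen on all interfaces instead of only 127.0.0.1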

Atlassian-connect: Error on 'installed' event

I'm trying to run the example Jira add-on. I have created the credentials.json file and have run npm i and node app.js, but I have problems with the installed event.
Here is the Node.js log:
Watching atlassian-connect.json for changes
Add-on server running at http://MacBook-Air.local:3000
Initialized sqlite3 storage adapter
Local tunnel established at https://a277dbdf.ngrok.io/
Check http://127.0.0.1:4040 for tunnel status
Registering add-on...
GET /atlassian-connect.json 200 13.677 ms - 784
Saved tenant details for 608ff294-74b9-3edf-8124-7efae2c16397 to database
{ key: 'my-add-on',
clientKey: '608ff294-74b9-3edf-8124-7efae2c16397',
publicKey: 'MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCtKxrEBipTMXhRHlv9zcSLR2Y9h5YQgNQ5vpJ40tF9RmuIzByjkKTurCLHFwMAWU6aLQM+H+Z8wAlpL9AVlN5NKrEP8+a3mGFUOj/5nSJ7ZWHjgju0sqUruyEkKLvKuhWkKkd9NqBxogN0hxv7ue5msP5ezwei/nTJXmnmA5qOAQIDAQAB',
sharedSecret: 'LfT9elHM7iHkto5pHr+MnpH0SR1ypunIDoCyt6ugVJ1Q4hWHurG8k5DjVzLcvT2C98DDbiJiA89VNB0e3DiUvQ',
serverVersion: '100075',
pluginsVersion: '1.3.407',
baseUrl: 'https://gleb-olololololo-22.atlassian.net',
productType: 'jira',
description: 'Atlassian JIRA at https://gleb-olololololo-22.atlassian.net ',
eventType: 'installed' }
POST /installed?user_key=admin 204 51.021 ms - -
Failed to register with host https://gleb-olololololo-22%40yopmail.com:gleb-olololololo-22#gleb-olololololo-22.atlassian.net (200)
The add-on host did not respond when we tried to contact it at "https://a277dbdf.ngrok.io/installed" during installation (the attempt timed out). Please try again later or contact the add-on vendor.
{"type":"INSTALL","pingAfter":300,"status":{"done":true,"statusCode":200,"contentType":"application/vnd.atl.plugins.task.install.err+json","errorMessage":"The add-on host did not respond when we tried to contact it at \"https://a277dbdf.ngrok.io/installed\" during installation (the attempt timed out). Please try again later or contact the add-on vendor.","source":"https://a277dbdf.ngrok.io/atlassian-connect.json","name":"https://a277dbdf.ngrok.io/atlassian-connect.json"},"links":{"self":"/rest/plugins/1.0/pending/80928cb9-f64e-42d0-9a7e-a1fe8ba81055","alternate":"/rest/plugins/1.0/tasks/80928cb9-f64e-42d0-9a7e-a1fe8ba81055"},"timestamp":1513692335651,"userKey":"admin","id":"80928cb9-f64e-42d0-9a7e-a1fe8ba81055"}
Add-on not registered; no compatible hosts detected
I have reviewed tons of information on Google, but didn't find an answer.
Some more details that may help:
It happened suddenly. It had been working OK, but about a week ago I started getting this error and cannot fix it. I didn't change anything; I just ran the add-on again, as I do every day.
If I try to upload the add-on manually, I get this error in the terminal:
GET / 302 17.224 ms - 0
GET /atlassian-connect.json 200 2.503 ms - 783
Found existing settings for client 608ff294-74b9-3edf-8124-7efae2c16397. Authenticating reinstall request
Authentication verification error: 401 Could not find authentication data on request
POST /installed?user_key=admin 401 22.636 ms - 45
The most likely reason I've found on Google is a wrong server time, but the time on my local machine is correct (at least for my timezone).
Anyone has any thoughts about this problem?
Thanks!
This kept happening to me at random. It would be working, then I'd run npm start and get the error. Since I'm not using a database right now, I simply removed all references to the juggling-sqlite database; they were in package.json, package-lock.json, and config.json, and I also deleted store.db (a rough sketch of the cleanup is below). That got it working for me. It's pretty frustrating that this happens; I'm not sure of a better way around it.
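The cleanup itself is roughly this (run in the add-on project directory; the edits to package.json, package-lock.json, and config.json still have to be made by hand):
rm -f store.db       # drop the stale tenant data (old clientKey/sharedSecret)
npm install          # re-resolve dependencies after removing the jugglingdb/sqlite entries
npm start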
