I need to collect firewall rules data belonging to my client remotely using the OPSEC API. I researched a little on the net and found out that I could use OPSEC's LEA (Log Export API) (more info: https://www.fir3net.com/Firewalls/Check-Point/a-quick-guide-to-checkpoints-opsec-lea.html). I also found a project named fw1-loggrabber (https://github.com/certego/fw1-loggrabber). I am quite new to network security and know practically nothing about Check Point firewalls. So my question is: could someone briefly explain the basics of firewall rules for Check Point and how to collect them using the OPSEC API? More specifically, are the rules included in the Check Point logs, or is there a specific method in LEA to grab the rules?
Thank you all!
You can't use LEA to collect firewall rules; that API only works with Check Point logs.
To read rules and objects you need to use the CPMI API. There are some samples on the Check Point website; ask your customer to download the documentation and samples.
CheckPoint OPSEC Documentation: https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk63026
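For completeness on the LEA side mentioned in the question: fw1-loggrabber is configured through a lea.conf file. Here is a minimal sketch, assuming SIC-based (sslca) authentication; the directive names follow the fw1-loggrabber sample configuration, while the IP address, port, and SIC names below are placeholders you would replace with your own:

```
lea_server auth_type sslca
lea_server ip 10.1.1.1
lea_server auth_port 18184
opsec_sslca_file /usr/local/fw1-loggrabber/etc/opsec.p12
opsec_sic_name "CN=lea_client,O=mgmt_server..abcdef"
lea_server opsec_entity_sic_name "cn=cp_mgmt,o=mgmt_server..abcdef"
```

Note that this only retrieves log records, not the rulebase itself, which is why CPMI is needed for the rules.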
I'm developing a PolicyCenter integration with BillingCenter. I followed the initial step-by-step setup in the documentation, but when I change a field on an account in PolicyCenter, the change is not synchronized to BillingCenter.
I need to sync PolicyCenter account updates to BillingCenter, but I couldn't find anything specific in the documentation.
To resolve the issue of the PolicyCenter integration with BillingCenter not synchronizing properly, you can try the following steps:

1. Check the configuration settings: ensure that the integration settings are configured correctly, including the mapping of fields between PolicyCenter and BillingCenter.
2. Verify data consistency: make sure that the data in PolicyCenter and BillingCenter is consistent and meets the required format, including the data types, lengths, and values of the fields.
3. Monitor system logs: check the logs of both PolicyCenter and BillingCenter for any error messages or exceptions related to the integration. This can help you identify issues with the data or with communication between the two systems.
4. Test the integration: run a test of the integration to see whether it is working properly and to pinpoint any issues with the integration or the data flow.
5. Contact support: if you are unable to resolve the issue, reach out to the Guidewire support team. They can provide guidance on troubleshooting and resolving whatever is causing the synchronization problem.
If you need further guidance on any of these steps or have any other questions, you can consult the Guidewire documentation or reach out to the support team for help.
Try the preUpdate plugin (see DemoPreUpdate.gs).
I have a local VPS in my country that hosts and serves my Node.js REST API.
However, I will soon need to open it up to other countries.
That means remote clients will be calling my services.
Since they are far away, the connection will probably be slow.
How can I avoid this? Maybe I need more servers located in their countries too, but then how can the data be shared through one DB?
I'm not looking for a full tutorial on how to do that (though one would be nice to have); I'm looking for information about the methodology.
What do you recommend: keep buying servers in remote countries and share their data between them somehow, or use some cloud service like Firebase? And how do cloud services work in the first place?
Without going into too much detail on each item, here are some key points I think you should focus your learning on to solve your problem.
For data storage, look into Firestore (not the JSON-based Realtime Database), as Firestore is globally scalable.
For your REST endpoints I would use Google Cloud Functions, but without knowing the nature of your application it's hard to say whether that's suitable. The key to being able to reach global scale is having cacheable endpoints; then you are leveraging Google's global CDN, which is much faster than hitting the origin server. Note: the Firebase Cloud Functions infrastructure WILL face cold-start issues, which may or may not be a problem for you.
Cache invalidation is a little lacking, so you can leverage longer max-age cache settings, but use either cache busting and/or the stale-while-revalidate header to help with this.
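As a concrete illustration of the caching point, here is a minimal sketch of an HTTP Cloud Function written with the firebase-functions SDK; the endpoint name, payload, and max-age values are arbitrary examples, not something prescribed by the original answer:

```typescript
import * as functions from "firebase-functions";

// Responses carrying a public Cache-Control header can be cached by the
// CDN, so repeat requests are served from an edge close to the client
// instead of hitting the origin (and paying the cold-start cost) each time.
export const listItems = functions.https.onRequest((req, res) => {
  res.set(
    "Cache-Control",
    // s-maxage governs the shared CDN cache; stale-while-revalidate lets a
    // stale copy be served while the cache refreshes in the background.
    "public, max-age=300, s-maxage=600, stale-while-revalidate=120"
  );
  res.json({ items: ["example"] }); // placeholder payload
});
```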
There is some great info here https://www.youtube.com/watch?v=dbV-293m1dQ that covers some of what I have mentioned in more detail.
I am completely new to Elasticsearch, but I like it very much. The only thing I can't find and can't get done is securing Elasticsearch for production systems. I have read a lot about using nginx as a proxy in front of Elasticsearch, but I have never used nginx and never worked with proxies.
Is this the typical way to secure elasticsearch in production systems?
If so, are there any tutorials or nice reads that could help me implement this? I really would like to use Elasticsearch in our production system instead of Solr and Tomcat.
There's an article about securing Elasticsearch which covers quite a few points to be aware of here: http://www.found.no/foundation/elasticsearch-security/ (Full disclosure: I wrote it and work for Found)
There are also some things here you should know: http://www.found.no/foundation/elasticsearch-in-production/
To summarize the summary:
At the moment, Elasticsearch does not consider security to be its job. Elasticsearch has no concept of a user. Essentially, anyone that can send arbitrary requests to your cluster is a “super user”.
Disable dynamic scripts. They are dangerous.
Understand that sometimes tricky configuration is required to limit access to indexes.
Consider the performance implications of multiple tenants: a weakness or a bad query in one can bring down an entire cluster!
Proxying ES traffic through nginx with, say, basic auth enabled is one way of handling this (but use HTTPS to protect the credentials). Even without basic auth in your proxy rules, you might, for instance, restrict access to various endpoints to specific users or from specific IP addresses.
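As a rough sketch of that approach, an nginx server block along these lines would put HTTPS plus basic auth in front of a cluster listening only on localhost; the hostname, certificate paths, and the choice of which endpoints to expose are placeholders to adapt:

```nginx
server {
    listen 443 ssl;
    server_name es.example.com;

    ssl_certificate     /etc/nginx/ssl/es.crt;
    ssl_certificate_key /etc/nginx/ssl/es.key;

    # Basic auth over HTTPS so the credentials are not sent in the clear.
    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Only expose the search endpoint; everything else (deletes, admin
    # APIs) stays unreachable from the outside.
    location ~ ^/[a-z0-9_-]+/_search$ {
        proxy_pass http://127.0.0.1:9200;
    }

    location / {
        return 403;
    }
}
```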
What we do in one of our environments is use Docker. Docker containers are only accessible to the world and/or to other Docker containers if you explicitly define them as such; by default, they are isolated.
In our docker-compose setup, we have the following containers defined:
nginx - Handles all web requests, serves up static files and proxies API queries to a container named 'middleware'
middleware - A Java server that handles and authenticates all API requests. It interacts with the following three containers, each of which is exposed only to middleware:
redis
mongodb
elasticsearch
The net effect of this arrangement is that access to Elasticsearch can only go through the middleware piece, which ensures authentication, roles, and permissions are correctly handled before any queries are sent through.
A full Docker environment is more work to set up than a simple nginx proxy, but the end result is more flexible, scalable, and secure.
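For illustration, a stripped-down docker-compose.yml matching that description might look like the sketch below; the image names are placeholders, and the point is simply that nginx is the only service publishing a port, so the data stores are reachable only from other containers on the compose network:

```yaml
version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "443:443"              # the only container exposed to the outside
    depends_on:
      - middleware
  middleware:
    image: example/middleware  # placeholder for the Java API server image
    depends_on:
      - redis
      - mongodb
      - elasticsearch
  redis:
    image: redis
  mongodb:
    image: mongo
  elasticsearch:
    image: elasticsearch
```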
Here's a very important addition to the info presented in the answers above. I would have added it as a comment, but don't yet have the reputation to do so.
While this thread is old(ish), people like me still end up here via Google.
Main point: this link is referenced in Alex Brasetvik's post:
https://www.elastic.co/blog/found-elasticsearch-security
He has since updated it with this passage:
Update April 7, 2015: Elastic has released Shield, a product which provides comprehensive security for Elasticsearch, including encrypted communications, role-based access control, AD/LDAP integration and Auditing. The following article was authored before Shield was available.
You can find a wealth of information about Shield in the Shield documentation: https://www.elastic.co/guide/en/shield/current/index.html
A very key point to note is that Shield requires Elasticsearch version 1.5 or newer.
I also had the same question, and I found a plugin provided by the Elasticsearch team: Shield. The free version is limited; for production use you need to buy a license. Please find the link below for your perusal:
https://www.elastic.co/guide/en/shield/current/index.html
I don't have much experience in the medical domain.
We are evaluating a requirement from a client who uses the Cerner EMR system.
As per the requirement, we need to fetch some EMR/EHR data from Cerner and display it in a SharePoint 2013 portal.
What kind of integration options does Cerner propose to meet this requirement? Are there any APIs or web services exposed which can be used to build custom solutions?
As far as I know, Cerner exposes EMR/EHR information in HL7 format, but I don't have any idea how to access that.
I have also asked Cerner about this and am awaiting replies from their end.
If anybody has been associated with a similar kind of job, could you throw some light on this and provide me with some insights?
You will need to request an interface between your organization and the facility with the EMR. An interface in the health care IT world is not the same as a GUI; it is the mechanism (program/tool) that transfers HL7 data between one entity and the other. There will probably be a cost to have an interface set up. However, that is the traditional way Cerner communicates with third parties. HIPAA laws will require that this connection be very secure.
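For context, the HL7 v2 data such an interface moves around is pipe-delimited text. Below is a minimal, made-up example of an admission (ADT^A01) message; the application, facility, and patient values are entirely fictional:

```
MSH|^~\&|SENDING_APP|SENDING_FACILITY|RECEIVING_APP|RECEIVING_FACILITY|20240101120000||ADT^A01|MSG00001|P|2.3
PID|1||123456^^^MRN||DOE^JOHN||19800101|M
PV1|1|I|ICU^101^A
```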
You might also see if the facility with the EMR has an existing interface that produces the info you are after. You may be able to share that data or have a flat file generated from that interface that you could get access to. Because of HIPAA regulations, your client may be reluctant to share information in that manner.
I would suggest you start with your client's interface/integration team. They would be the ones that manage the information into and out of Cerner. They could also shed some light on how they prefer to see things done.
Good Luck
There are two ways of achieving this that I know of. One is direct connectivity to Cerner's Oracle database. This seems unlikely to be possible, as Cerner doesn't allow other vendors to have direct access to their database.
The other way is to use Cerner's mPage web services, which is how we achieved it. The client needs to host the web services on IBM WAS or some other container; we used WAS as that was readily available to us. Once the hosting is done, you get a URL, and using that you can execute any CCL program, which will return the data in JSON/XML format. The mPage web service has basic HTTP authentication.
The CCL program then has to be written in a way that returns the data you require.
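As a rough illustration of the client side, calling such a service amounts to an authenticated HTTP request. The sketch below assumes Node 18+ (for the global fetch); the host, path, query parameters, and credentials are all hypothetical placeholders, not Cerner's actual endpoint layout:

```typescript
// Hypothetical mPage web-service call with HTTP Basic authentication.
// Replace the host, path, and program name with whatever your hosted
// web services actually expose; none of these values are real.
const ENDPOINT =
  "https://was.example-hospital.org/mpage-ws/run?program=MY_CCL_PROGRAM&format=json";

const credentials = Buffer.from("username:password").toString("base64");

async function fetchEmrData(): Promise<unknown> {
  const res = await fetch(ENDPOINT, {
    headers: { Authorization: `Basic ${credentials}` },
  });
  if (!res.ok) {
    throw new Error(`mPage request failed with status ${res.status}`);
  }
  // The shape of the payload is entirely up to the CCL program.
  return res.json();
}
```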
We have a successful setup and have been working on this since 2014. For more details you can also try the uCERN portal.
Thanks,
Navin
I am developing a social app on iOS that has many-to-many relations, local persistence, and user interaction. I have tried using the native Parse API on iOS and found it too cumbersome to handle all the client-server logic, so my focus shifted to finding a syncing solution.
After some research I found AFIncrementalStore quite easy to use, and it's highly integrated with Core Data. I have just started working on this and have two questions:
1) How to do the authentication process? Is it in AFRESTClient?
2) How to set up AFRESTClient to match Parse's REST API? (an example would be great!)
P.S. I also found FTASync, which seems to be another solution. Any thoughts on this framework?
Any general suggestion on client-server syncing solutions will be highly appreciated!
Thanks,
Lei Zhang
Back in iOS 5, Apple quietly rolled out NSIncrementalStore to manage the connection between APIs and persistent stores. Because I couldn't word it better myself:
NSIncrementalStore is an abstract subclass of NSPersistentStore designed to "create persistent stores which load and save data incrementally, allowing for the management of large and/or shared datasets". And while that may not sound like much, consider that nearly all of the database adapters we rely on load incrementally from large, shared data stores. What we have here is a goddamned miracle.
Source: http://nshipster.com/nsincrementalstore/
That being said, I've been working on my own NSIncrementalStore (built specifically for Parse and utilizing the Parse iOS/OS X SDK) and you're welcome to check out/use/contribute to the project at https://github.com/sbonami/PFIncrementalStore.
Take a look at this StackOverflow question and at Chris Wagner's article on raywenderlich.com.
The linked SO question has examples for how to include the authentication token with each request to Parse. So you'll just need to have the user log in first, and store their token to include it with each subsequent request.
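To make that flow concrete, here is a language-agnostic sketch (written in TypeScript, assuming Node 18+ for the global fetch) of the two steps; the endpoints and headers follow the classic hosted Parse REST API docs, and the credentials are placeholders:

```typescript
const APP_ID = "yourAppId";     // placeholder Parse application ID
const REST_KEY = "yourRestKey"; // placeholder REST API key

// Step 1: log in once and keep the session token.
async function login(username: string, password: string): Promise<string> {
  const params = new URLSearchParams({ username, password });
  const res = await fetch(`https://api.parse.com/1/login?${params}`, {
    headers: {
      "X-Parse-Application-Id": APP_ID,
      "X-Parse-REST-API-Key": REST_KEY,
    },
  });
  if (!res.ok) throw new Error(`login failed: ${res.status}`);
  const body = (await res.json()) as { sessionToken: string };
  return body.sessionToken;
}

// Step 2: send the stored token with every subsequent request.
async function fetchObject(token: string, className: string, objectId: string) {
  const res = await fetch(`https://api.parse.com/1/classes/${className}/${objectId}`, {
    headers: {
      "X-Parse-Application-Id": APP_ID,
      "X-Parse-REST-API-Key": REST_KEY,
      "X-Parse-Session-Token": token,
    },
  });
  return res.json();
}
```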
Chris Wagner's tutorial has a sample AFHTTPClient named SDAFParseApiClient to communicate with the Parse REST API. You'd have to adapt it to be an AFRESTClient subclass, but it should give you a start.
Some other thoughts between the two solutions you're considering:
AFIncrementalStore does not allow the user to make any changes without a network connection, while FTASync keeps a full Core Data SQLite store locally and syncs changes to the server when you tell it to.
FTASync requires you to make all your synched managed objects subclasses of FTASyncParent, with extra properties for sync metadata. AFIncrementalStore keeps its metadata behind the scenes, not in your model.
FTASync appears not to be widely used and hasn't been updated in over a year; if you use it you will likely be maintaining it.