generate attack graph based on data file - security

I want to create an attack graph based on a dataset that I stored in a CVSS file. I have not written anything yet because I do not know where to start. Has anyone here done something related to this? I hope you can share the basic idea with me.
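Not an answer from experience, just one possible way to get started: treat each vulnerability record as a node, connect two records when one's postcondition satisfies the other's precondition, and let a graph library do the path finding. The CSV layout and field names below are pure assumptions about what your CVSS file might contain.

```python
# A minimal sketch, assuming the "CVSS file" is a CSV with one row per
# vulnerability: host, cve_id, cvss_score, precondition, postcondition.
# All field names are hypothetical; adapt them to your actual data.
import csv
import networkx as nx

def build_attack_graph(path):
    g = nx.DiGraph()
    rows = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["cvss_score"] = float(row["cvss_score"])
            rows.append(row)
            # Each vulnerability becomes a node, weighted by its CVSS score.
            g.add_node(row["cve_id"], host=row["host"], score=row["cvss_score"])
    # Add an edge when exploiting one vulnerability yields the privilege
    # (postcondition) that another vulnerability requires (precondition).
    for a in rows:
        for b in rows:
            if a["cve_id"] != b["cve_id"] and a["postcondition"] == b["precondition"]:
                g.add_edge(a["cve_id"], b["cve_id"])
    return g

# Example: enumerate attack paths from an entry vulnerability to a target one.
# g = build_attack_graph("vulns.csv")
# print(list(nx.all_simple_paths(g, "CVE-2020-0001", "CVE-2020-0999")))
```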

Related

full database table update

I currently have a REST endpoint with basic CRUD operations for an SQLite database.
But my application updates whole tables at a time (with a "save" button).
My current idea/solution is to query the data first, compare it, and update only the rows that changed.
The solution is a bit complex because there are several different types of changes that can be made:
Add row
Remove row
Row content changed (similar to content moving up or down)
Is there a simpler solution?
The simplest solution is a bit dirty: delete the table, recreate it, and add each row back.
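For concreteness, a rough sketch of the diff approach described above, assuming a sqlite3 table with an integer primary key (table and column names are placeholders):

```python
# Minimal sketch of the "compare then apply" approach, assuming an SQLite
# table whose rows arrive as dicts keyed by an integer primary key "id".
# Table and column names are placeholders.
import sqlite3

def save_table(conn: sqlite3.Connection, incoming: list[dict]):
    cur = conn.execute("SELECT id, name, value FROM items")
    existing = {row[0]: {"id": row[0], "name": row[1], "value": row[2]}
                for row in cur.fetchall()}
    new = {row["id"]: row for row in incoming}

    with conn:  # one transaction for the whole "save"
        for rid in existing.keys() - new.keys():          # removed rows
            conn.execute("DELETE FROM items WHERE id = ?", (rid,))
        for rid in new.keys() - existing.keys():          # added rows
            r = new[rid]
            conn.execute("INSERT INTO items (id, name, value) VALUES (?, ?, ?)",
                         (r["id"], r["name"], r["value"]))
        for rid in new.keys() & existing.keys():          # changed rows
            if new[rid] != existing[rid]:
                r = new[rid]
                conn.execute("UPDATE items SET name = ?, value = ? WHERE id = ?",
                             (r["name"], r["value"], rid))
```

For small tables, the "dirty" alternative of deleting everything and re-inserting inside the same transaction is often perfectly acceptable too.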
The simple answer is: yes, you are correct.
That is exactly how you do it.
There is literally no easy way to do this.
Be aware that, for example, Firebase exists entirely to do this.
Firebase is worth billions, is far from perfect, and was created by the smartest minds around. It exists to do exactly what you ask.
Again, there is no easy solution to what you ask!
Some general reading:
One of the handful of decent articles on this:
https://www.objc.io/issues/10-syncing-data/data-synchronization/
Secondly, you will want to familiarize yourself with Firebase, since a normal part of computing now is either using BaaS sync solutions (e.g. Firebase, usually some NoSQL solution) or doing it by hand.
http://firebase.google.com/docs/ios/setup/
(I don't especially recommend Firebase, but you have to know how to use it, in the same way that you have to know how to use regex and how to write SQL calls.)
Finally you can't make realistic iOS apps without Core Data,
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/CoreData/index.html
Core Data by no means solves the problem you describe, but realistically you will use it while you solve the problem conceptually.
You may enjoy Realm,
https://realm.io
which again is precisely a solution to the problem you describe (which is basically the central problem in typical modern app development). As with Firebase, even if you don't like it or decide not to go with it on a particular project, you need to be familiar with it.

Azure SQL Database Autotuning - Develop without worrying about indexes?

Azure's SQL database auto-tuning feature creates and drops indexes based on database usage. I've imported an old database into Azure which did not have comprehensive indexes defined, and it seems to have done a great job at reducing CPU & DTU usage over a relatively short period of time.
It feels wrong, but does this mean I can develop going forward without defining indexes? Does anyone do this? The SSMS index editor is a pain and slow to work with. Not having to worry about or maintain indexes would speed up development time.
The auto-tuning is taking advantage of three things: the Query Store, missing index suggestions, and machine learning.
First, the last known good plan is a weaponization of the Query Store. This is in both Azure and SQL Server 2017. It will spot queries that have degraded performance after a plan change (and quite a few executions, not just one) and will revert back to that plan. If performance degrades again, it turns that off. This is great. However, if you write crap code or have bad data structures or out-of-date statistics, it doesn't help very much.
The automatic indexes in Azure use two things: missing index suggestions from the optimizer and machine learning on Azure. With those, if the missing index comes up a lot over a period of 12-18 hours (read this blog post on automating it), you'll get an index suggestion. It measures for another 12-18 hours, and if that index helped, it stays; if not, it goes. This is also great. However, it suffers from two problems. First, same as before, if you have crap code, etc., this will only really help at the margins. Second, the missing index suggestions from the optimizer are not always the best index. For example, when I wrote the blog post, it identified a missing index appropriately, but it missed the fact that an INCLUDE column would have been even better than the index it suggested.
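For illustration only, the kind of difference being described looks roughly like this; the table, column and index names are invented, not what the auto-tuner generated:

```python
# Illustrative only: table, column and index names are made up, and so is
# the connection string; this is not what Azure's auto-tuning created.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
                      "SERVER=myserver.database.windows.net;DATABASE=mydb;"
                      "UID=me;PWD=secret")

# The optimizer's missing-index suggestion is often just the key column:
#   CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
# Adding the queried columns as INCLUDEs makes the index covering, so the
# query can skip the key lookup entirely:
conn.execute("CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering "
             "ON dbo.Orders (CustomerID) INCLUDE (OrderDate, TotalDue)")
conn.commit()
```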
A human brain and eyeball are still going to be needed for the more difficult problems. These automations take away a lot of the easier, low-hanging problems. Overall, that's a great thing. However, don't confuse it with a panacea for all things performance related. It's absolutely not.

Storing RDF data in Cassandra

I saw some questions about using Cassandra to store RDF data, but they are really old (2013).
So, I have RDF time-annotated data (RDFStream) and I'd like to store it in a Cassandra cluster. I know it's probably not the best idea, but I have a research interest in it.
I'm looking for a way to do it because I'm not sure it is really possible. For now I'm really confused about how to organise the database.
Does anyone have experience or any hints about this? Is it possible to do?
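Not a definitive design, but one common starting point is a plain triples table with the time annotation as a clustering column, so each subject's observations sort by time. Everything below (keyspace, table, column names) is just an assumption to make the idea concrete:

```python
# Sketch of one possible layout for time-annotated triples in Cassandra.
# Keyspace, table and column names are illustrative, not a recommendation.
from datetime import datetime, timezone
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS rdf
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS rdf.triples (
        subject      text,
        predicate    text,
        obj          text,
        annotated_at timestamp,
        PRIMARY KEY ((subject), predicate, annotated_at)
    ) WITH CLUSTERING ORDER BY (predicate ASC, annotated_at DESC)
""")

insert = session.prepare(
    "INSERT INTO rdf.triples (subject, predicate, obj, annotated_at) "
    "VALUES (?, ?, ?, ?)")
session.execute(insert, ("http://example.org/sensor1",
                         "http://example.org/hasTemperature",
                         "21.5",
                         datetime.now(timezone.utc)))

# Reads must follow the partition key, e.g. everything known about one subject:
for row in session.execute(
        "SELECT predicate, obj, annotated_at FROM rdf.triples "
        "WHERE subject = %s", ("http://example.org/sensor1",)):
    print(row.predicate, row.obj, row.annotated_at)
```

Because Cassandra queries have to follow the partition key, you would typically maintain additional tables (e.g. partitioned by predicate or by time window) for other access patterns.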

MongoDB for collection of production data

I am facing a new type of problem that I haven't tried tackling before. So I would like some pointers in the right direction by someone more knowledgeable than I :-)
I have been asked by a friend to help him design a control system for a production line. The project sounds really interesting, and I can't stop thinking about it.
I have already found that I can control the system using a node.js server. So far so good (HTML5 interface here we come)! But where I really want this system to stand out is in the collection of system metrics. The system reports all kinds of things such as temperature, flow etc, and these metrics are reported up to several hundred times per second per metric... and this runs 24/7.
My thought is to persist this in a MongoDB database and do some real-time statistics on it. The "competition", if you will, seems to save this in a SQL Server database and allow the operators to export aggregated data to Excel, and do statistics in Excel.
What are the strategies for doing real-time statistics using MongoDB?
I would really like to provide instant feedback and monitoring based on these metrics, such as the average temperature over the last 24 hours, spikes, etc., and also enable alerts. There will not be much advanced statistics done on the server. If that is needed, I would enable export of the data to a program such as SPSS.
Is MongoDB a good fit for that? I would love to use a Linux machine instead of a Windows machine with SQL Server and a WinForms control interface. The license fees alone are enough to put me off, although I know that probably isn't the case for the people buying the machinery.
This will not be placed in the cloud, but rather on a single server on the network. Next to the machine being operated, I will place a touch interface that through a browser will contact the node.js server to invoke PLC commands. There can be multiple machines that need controlling, and they would all be controlled by the same central node.js server.
The machinery is controlled by PLC controllers from http://beckhoff.com/.
I am not a complete novice when it comes to MongoDB, but I have never put anything I have made into production, and I wouldn't put MongoDB on my CV... yet!
EDIT: It seems that the $inc operator is the way to go. But what if I want both the daily and hourly averages, as well as a continuous feed that updates a chart on screen with data every second using socket.io? Is it a good idea to update a document for each of the aggregates I need? I really also want to save every measurement, but maybe I could aggregate that on a per-second basis, so I don't store up to 1,000 records per second per metric?
MongoDB can definitely be used for your scenario. Look at http://www.slideshare.net/pstokes2/social-analytics-with-mongodb, http://docs.mongodb.org/manual/use-cases/pre-aggregated-reports/, or the question "Real-time statistics: MySQL(/Drizzle) or MongoDB?" for more on this topic.
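The pre-aggregated reports pattern from the second link boils down to $inc-ing counters in one document per metric per time bucket; a rough pymongo sketch (collection and field names are just examples):

```python
# Rough sketch of the pre-aggregated reports pattern with pymongo.
# One document per metric per hour; each reading bumps a counter and a sum,
# so averages are a single read. Names are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient()["production"]

def record(metric: str, value: float, ts: datetime):
    hour = ts.replace(minute=0, second=0, microsecond=0)
    db.metrics_hourly.update_one(
        {"metric": metric, "hour": hour},
        {"$inc": {"count": 1, "sum": value},
         "$min": {"min": value},
         "$max": {"max": value}},
        upsert=True)

record("temperature", 72.4, datetime.now(timezone.utc))

this_hour = datetime.now(timezone.utc).replace(minute=0, second=0, microsecond=0)
doc = db.metrics_hourly.find_one({"metric": "temperature", "hour": this_hour})
if doc:
    print("hourly average:", doc["sum"] / doc["count"])
```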
What I am really looking for is the Aggregation Framework: http://docs.mongodb.org/manual/tutorial/aggregation-examples/
That gives me exactly the kind of stats that I would like to see. I can use this to calculate sums and averages as data is written, and then also allow for ad-hoc queries should they be needed.
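For example, a pipeline along these lines (collection and field names assumed) gives per-hour averages and spikes over the last 24 hours from the raw readings:

```python
# Hypothetical ad-hoc query with the aggregation framework: per-hour
# average/max temperature over the last 24 hours from raw readings.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

readings = MongoClient()["production"]["readings"]

since = datetime.now(timezone.utc) - timedelta(hours=24)
pipeline = [
    {"$match": {"metric": "temperature", "ts": {"$gte": since}}},
    {"$group": {
        "_id": {"year": {"$year": "$ts"},
                "day": {"$dayOfYear": "$ts"},
                "hour": {"$hour": "$ts"}},
        "avg": {"$avg": "$value"},
        "max": {"$max": "$value"},
        "n": {"$sum": 1}}},
    {"$sort": {"_id": 1}},
]
for bucket in readings.aggregate(pipeline):
    print(bucket)
```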
For a little insight on performance, read this awesome blogpost!
http://devsmash.com/blog/mongodb-ad-hoc-analytics-aggregation-framework
Also, anyone else looking to do something like this should take a look at this post to see how to save the individual events. I don't need to keep data for longer than a week, for example, so a rolling log should be more than enough for me: http://blog.mongodb.org/post/172254834/mongodb-is-fantastic-for-logging
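The rolling log in that post maps to a capped collection, roughly (the size below is a guess to tune so it covers the week of data you want to keep):

```python
# A capped collection keeps the raw events in a fixed-size rolling window,
# discarding the oldest documents automatically. Size is illustrative.
from pymongo import MongoClient
from pymongo.errors import CollectionInvalid

db = MongoClient()["production"]
try:
    db.create_collection("raw_events", capped=True, size=512 * 1024 * 1024)
except CollectionInvalid:
    pass  # collection already exists

db.raw_events.insert_one({"metric": "temperature", "value": 72.4})
```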
With this I am very close to having a really sweet setup, and I am beginning to feel confident that this is a good choice over MySQL or MSSQL.

Why isn't there a read analog of validate_doc_update in CouchDB?

I am posing it as a suggested feature of CouchDB because that is the best way to express what I would like to achieve, and as a rant because I have not found a good reason for its lack:
Why not have a validate_doc_read(doc, userCtx) function so that I can implement per-document read control? It would work exactly as validate_doc_update works, by throwing an error when you want to deny the read. What am I missing? Has someone found a workaround for per-document read control?
I'm not sure what the actual reason is, but having read validation would make reads very slow, and view indexes very hard to update incrementally (or perhaps impossible, meaning that you'd basically have to have a per-user index).
The way to implement what you want is via filtered replication, so you create a new DB with only the documents you want a given user to be able to read.
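As a purely illustrative sketch of that workaround: store a JavaScript filter function in a design document on the shared database, then replicate into a per-user database through that filter (the owner field, database names and credentials below are assumptions):

```python
# Sketch of per-user filtered replication via CouchDB's HTTP API.
# Database names, the "owner" field and credentials are all hypothetical.
import requests

couch = "http://admin:secret@localhost:5984"

# 1. A filter function (JavaScript, stored in a design doc on the source DB)
#    that only lets through documents owned by the requested user.
requests.put(f"{couch}/shared_db/_design/acl", json={
    "filters": {
        "by_owner":
            "function(doc, req) { return doc.owner === req.query.owner; }"
    }
})

# 2. Create the per-user database and replicate only that user's documents.
requests.put(f"{couch}/userdb_alice")
requests.post(f"{couch}/_replicate", json={
    "source": f"{couch}/shared_db",
    "target": f"{couch}/userdb_alice",
    "filter": "acl/by_owner",
    "query_params": {"owner": "alice"},
    "continuous": True
})
# Alice is then granted read access only to userdb_alice.
```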
The main problem with creating a validate_doc_read is how reduce functions would work with that behavior.
I don't believe that a validate_doc_read is the best solution, because we would be giving away one feature in favour of another.
Instead, you must restrict view access using a proxy.

Resources