When to use drmModeFreeResources after a drmModeGetResources? - linux

If I'm working with DRM on Linux and trying to get the number of displays/connectors on a GPU, when do I need to call drmModeFreeResources/drmModeFreeConnector?
drmModeResPtr drmResources = drmModeGetResources(drmFd);
for (int i = 0; i < drmResources->count_connectors; i++) {
    drmModeConnectorPtr connector;
    connector = drmModeGetConnector(drmFd, drmResources->connectors[i]);
    if (connector && connector->connection == DRM_MODE_CONNECTED) {
        do_something();
    }
    drmModeFreeConnector(connector); // Do I need to do this?
}
drmModeFreeResources(drmResources); // Do I need to do this?
Do I need to free every time I get resources/connector? Or do I only need to do it after I'm done with the resources and want to destroy them, such as when the connector or display is no longer connected?
Thanks

Related

node-maxmind usage for streaming data with IP address

I have streaming data coming in with IP address. I want to translate the IP to longitude and latitude before putting the data into my database.
This is what I was doing, but it is causing some issues. I also tried putting locationObject outside the for loop, which weirdly uses a lot of memory. I know this is blocking code, but it should be fast. Still, I see memory issues because data objects arrive from the stream continuously and each data object is huge.
for (var i = 0; i < data.length; i++) {
    if (data.client_ip !== null) {
        var locationLookup = maxmind.openSync('./GeoIP2-City.mmdb');
        var ip = data.client_ip;
        var maxmindObj = locationLookup.get(ip);
        locationObject.country = maxmindObj.country.names.en;
        locationObject.latitude = maxmindObj.location.latitude;
        locationObject.longitude = maxmindObj.location.longitude;
    }
}
Again, trying to put maxmind.openSync('./GeoIP2-City.mmdb'); outside the for loop causes a huge increase in memory.
The other option is to use non-blocking code:
maxmind.open('/path/to/GeoLite2-City.mmdb', (err, cityLookup) => {
    var city = cityLookup.get('66.6.44.4');
});
But I don't think it is a good idea to put this inside a loop.
How can I handle this? I am getting a data object every minute.
https://github.com/runk/node-maxmind
I'm not sure why you think reading the database file on each iteration of the loop would be fast ("blocking code" doesn't equal "fast code"); it's much better to read the database file once and then loop over data.
maxmind.openSync() will read the entire database into memory, which is mentioned in the README:
Be careful with sync version! Since mmdb files are quite large
(city database is about 100Mb) fs.readFileSync blocks whole
process while it reads file into buffer.
If you don't have memory to spare, the only other option would be to open the file asynchronously. Again, not inside the loop, but outside of it:
maxmind.open("./GeoIP2-City.mmdb", (err, locationLookup) => {
for (var i = 0; i < data.length; i++) {
if (data.client_ip !== null) {
var ip = data.client_ip;
var maxmindObj = locationLookup.get(ip);
locationObject.country = maxmindObj.country.names.en;
locationObject.latitude = maxmindObj.location.latitude;
locationObject.longitude = maxmindObj.location.longitude;
}
}
});
The only thing I am worried about is that I call this function so many times: every time my consumers read a jsonObject from Kafka, which happens every minute. Is there a better way to optimize that as well? How can I optimize this further?
function processData(jsonObject) {
    maxmind.open('./GeoIP2-City.mmdb', function(err, locationLookup) {
        if (err) {
            logger.error('something went wrong on maxmind fetch', err);
        }
        for (var i = 0; i < jsonObject.length; i++) { ... }
    });
}
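Following the advice above (read the database once, then loop), one way to avoid reopening it every minute is to open it once at startup and reuse the lookup. A minimal sketch along those lines; the startup-open pattern and the not-ready guard are assumptions, while locationLookup and logger mirror the question's code:
var maxmind = require('maxmind');

// Open the database once at startup and reuse the lookup for every batch.
var locationLookup = null;
maxmind.open('./GeoIP2-City.mmdb', function(err, lookup) {
    if (err) {
        return logger.error('could not open maxmind database', err);
    }
    locationLookup = lookup;
});

function processData(jsonObject) {
    if (!locationLookup) {
        // database not ready yet; skip (or queue) this batch
        return;
    }
    for (var i = 0; i < jsonObject.length; i++) {
        // ... same per-record lookup as before ...
    }
}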

How can you detect if ANY sounds are currently playing in soundJS?

How can you detect if any sounds are playing in soundJS?
I have lots of sounds firing on and off, sometimes legitimately over the top of each other. I need a way to find out whether any sounds are playing at any given time,
i.e. something like
createjs.Sound.isPlaying()
or
createjs.Sound.status()
Nothing exists like this in SoundJS currently.
You can look it up yourself, but it involves digging into private members, which is not recommended, and could break content down the road. Here is a quick sample:
function countActiveSounds() {
    var s = createjs.Sound.activePlugin,
        count = 0;
    for (var n in s._soundInstances) {
        var inst = s._soundInstances[n];
        for (var i = 0, l = inst.length; i < l; i++) {
            var p = inst[i];
            if (p.playState == "playSucceeded") { count++; }
        }
    }
    return count;
}
This involves reading the private _soundInstances hash and checking whether each sound's state is "playSucceeded". Once a sound completes, its state changes to "playFinished".
Again, use this with caution :)
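For example, a quick check built on the helper above (isAnySoundPlaying is just an illustrative name, not part of SoundJS):
// Hypothetical wrapper around the countActiveSounds() helper above.
function isAnySoundPlaying() {
    return countActiveSounds() > 0;
}

if (isAnySoundPlaying()) {
    console.log("Something is audible right now");
}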
It might make sense to log a feature request to the SoundJS GitHub.

How to limit the amount of requests per ip in Node.JS?

I was trying to think of a way to minimize the damage to my node.js application if I ever get a DDoS attack. I want to limit requests per IP: every IP address should be limited to so many requests per second. For example: no IP address can exceed 10 requests every 3 seconds.
So far I have come up with this:
http.createServer(function(req, res) {
    if (req.connection.remoteAddress ?????? ) {
        // block ip for 15 mins
    }
});
If you want to build this yourself at the app server level, you will have to build a data structure that records each recent access from a particular IP address so that when a new request arrives, you can look back through the history and see if it has been doing too many requests. If so, deny it any further data. And, to keep this data from piling up in your server, you'd also need some sort of cleanup code that gets rid of old access data.
Here's an idea for a way to do that (untested code to illustrate the idea):
function AccessLogger(n, t, blockTime) {
    this.qty = n;
    this.time = t;
    this.blockTime = blockTime;
    this.requests = {};
    // schedule cleanup on a regular interval (every 30 minutes)
    this.interval = setInterval(this.age.bind(this), 30 * 60 * 1000);
}

AccessLogger.prototype = {
    check: function(ip) {
        var info, accessTimes, now, limit, cnt;
        // add this access
        this.add(ip);
        // should always be an info here because we just added it
        info = this.requests[ip];
        accessTimes = info.accessTimes;
        // calc time limits
        now = Date.now();
        limit = now - this.time;
        // short circuit if already blocking this ip
        if (info.blockUntil >= now) {
            return false;
        }
        // short circuit an access that has not even had max qty accesses yet
        if (accessTimes.length < this.qty) {
            return true;
        }
        cnt = 0;
        for (var i = accessTimes.length - 1; i >= 0; i--) {
            if (accessTimes[i] > limit) {
                ++cnt;
            } else {
                // assumes cnts are in time order so no need to look any more
                break;
            }
        }
        if (cnt > this.qty) {
            // block from now until now + this.blockTime
            info.blockUntil = now + this.blockTime;
            return false;
        } else {
            return true;
        }
    },
    add: function(ip) {
        var info = this.requests[ip];
        if (!info) {
            info = {accessTimes: [], blockUntil: 0};
            this.requests[ip] = info;
        }
        // push this access time into the access array for this IP
        info.accessTimes.push(Date.now());
    },
    age: function() {
        // clean up any accesses that have not been here within this.time and are not currently blocked
        var ip, info, accessTimes, now = Date.now(), limit = now - this.time, index;
        for (ip in this.requests) {
            if (this.requests.hasOwnProperty(ip)) {
                info = this.requests[ip];
                accessTimes = info.accessTimes;
                // if not currently blocking this one
                if (info.blockUntil < now) {
                    // if newest access is older than time limit, then nuke the whole item
                    if (!accessTimes.length || accessTimes[accessTimes.length - 1] < limit) {
                        delete this.requests[ip];
                    } else {
                        // in case an ip is regularly visiting so its recent access is never old
                        // we must age out older access times to keep them from
                        // accumulating forever
                        if (accessTimes.length > (this.qty * 2) && accessTimes[0] < limit) {
                            index = 0;
                            for (var i = 1; i < accessTimes.length; i++) {
                                if (accessTimes[i] < limit) {
                                    index = i;
                                } else {
                                    break;
                                }
                            }
                            // remove index + 1 old access times from the front of the array
                            accessTimes.splice(0, index + 1);
                        }
                    }
                }
            }
        }
    }
};

var accesses = new AccessLogger(10, 3000, 15000);

// put this as one of the first middleware so it acts
// before other middleware spends time processing the request
app.use(function(req, res, next) {
    if (!accesses.check(req.connection.remoteAddress)) {
        // cancel the request here
        res.end("No data for you!");
    } else {
        next();
    }
});
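As a quick, hypothetical sanity check of the limiter outside of Express (same 10-requests-per-3-seconds settings as above; the IP is a placeholder):
// Simulate 12 rapid requests from one IP.
var limiter = new AccessLogger(10, 3000, 15000);
for (var i = 1; i <= 12; i++) {
    // the first 10 calls return true (allowed); the 11th and 12th return false (blocked)
    console.log(i, limiter.check("203.0.113.5"));
}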
This method also has the usual limitations around IP address monitoring. If multiple users are sharing an IP address behind NAT, this will treat them all as one single user and they may get blocked due to their combined activity, not because of the activity of one single user.
But, as others have said, by the time the request gets this far into your server, some of the DOS damage has already been done (it's already taking cycles from your server). It might help to cut off the request before doing more expensive operations such as database operations, but it is even better to detect and block this at a higher level (such as Nginx or a firewall or load balancer).
I don't think that is something that should be done at the HTTP server level. Basically, it doesn't prevent users from reaching your server, even if they won't see anything for 15 minutes.
In my opinion, you should handle that within your system, using a firewall. Although it's more a discussion for ServerFault or SuperUser, let me give you a few pointers.
Use iptables to set up a firewall on your entry point (your server or whatever else you have access to up the line). iptables allows you to set a limit on the maximum number of connections per IP. The learning curve is pretty steep, though, if you don't have a background in networking. That is the traditional way.
Here's a good resource geared towards beginners: Iptables for beginners
And something similar to what you need here: Unix StackExchange
I recently came across a really nice package called Uncomplicated Firewall (ufw); it has an option to limit the connection rate per IP and can be set up in minutes. For more complicated stuff, you'll still need iptables, though.
In conclusion, like Brad said,
let your application servers do what they do best... run your application.
And let firewalls do what they do best, kick out the unwanted IPs from your servers.
It is not a good idea to use Node.js to filter connections or apply a connection policy like that.
It is better to put Nginx in front of Node.js:
Client --> Nginx --> Node.js or application.
It is not difficult, and it is cheap because Nginx is open source too.
Good luck.
We can use the npm package limiting-middleware:
npm i limiting-middleware
Code :
const LimitingMiddleware = require('limiting-middleware');
app.use(new LimitingMiddleware({ limit: 100, resetInterval: 1200000 }).limitByIp());
// 100 request limit. 1200000ms reset interval (20m).
For more information, see the limiting-middleware package on npm.

Node.js chronological issue

I have a problem with Node.js. The commands of the program don't run chronologically and I don't know how to fix it.
I'm trying to download some images and text from a database and send them in packs of 8. But Node.js runs the for loop and the command after the loop at the same time.
Here's my code:
socket.on('background_dinamically', function(data) {
    connection.query("SELECT * FROM products WHERE id='" + data.cathegory + "'", function(err, rows, fields) {
        var count = 0;
        var array_elements = [];
        if (err) {
            socket.emit('errorserver');
        } else {
            for (var i = rows.length - 1; i >= 0; i--, count++) {
                array_elements.push(rows[i]);
                if (count == 8) {
                    socket.emit('image_loading_background', [array_elements, data]);
                    count = 0;
                    array_elements = [];
                }
            }
            if (count > 0 && count < 8) {
                socket.emit('image_loading_background', [array_elements, data]);
            }
        }
    });
});
Marc, first I would check if synchronisation can be done on the client side. If you force your nodejs app to synchronize before sending data to the client, scalability suffers.
If you cannot do without synchronizing on the server side, you can choose between spaghetti code or a sync lib.
Welcome to the world of asynchronous (not chronological) programming. By default, node will work on I/O operations in parallel as you are seeing. To get other behaviors including chronological (in serial), parallel batches, as well as error handling helpers, have a look at one of the many flow control libraries available. Specifically, I recommend caolan/async.
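If you go the caolan/async route, a minimal sketch of serial flow control might look like this; here batches is assumed to be the rows already split into groups of 8, and socket and data come from the question's code:
var async = require('async'); // npm install async (caolan/async)

// Emit each batch in order; the next batch is only processed after done() is called.
async.eachSeries(batches, function(batch, done) {
    socket.emit('image_loading_background', [batch, data]);
    done();
}, function(err) {
    if (err) {
        socket.emit('errorserver');
    }
    // all batches have been emitted, in order
});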

spotify models.Track object

Is the Spotify models.Track object restricted to non-local tracks only?
I've built a little spotify app that will calculate the total play time of some random assortment of tracks that are dropped into the sidebar, but it seems like the Track object doesn't get any information from local files.
Here's what I got so far:
models.application.observe(models.EVENT.LINKSCHANGED, function () {
    totalTime = 0;
    var links = models.application.links;
    if (links.length) {
        for (var i = 0; i < links.length; i++) {
            var track = models.Track.fromURI(links[i], function(t) {
                totalTime = totalTime + t.duration;
            });
        }
    }
    document.getElementById("time").innerHTML = secondsToString(Math.round(totalTime / 1000));
});
Everything is firing correctly and working great on Spotify tracks, but the whole reason I wrote this little app was so that I could calculate the total time of some of my really long audiobook files. Anyone know of a solution?
Link to the documentation page.
If this is indeed the case, I think you've found a bug/oversight.
However, local file URIs look like this: spotify:local:Coldplay:Mylo+Xyloto:Paradise:277
That last parameter is the length of the track, in integral seconds. It's a hacky workaround, but you could parse the URI and use the figure from that instead of models.Track.
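A rough sketch of that workaround, assuming the local URI format shown above (the helper name is illustrative, not part of the Spotify API):
// Hypothetical helper: pull the trailing seconds field out of a spotify:local: URI.
function localTrackSeconds(uri) {
    if (uri.indexOf('spotify:local:') !== 0) {
        return null; // not a local file URI
    }
    var parts = uri.split(':');
    var seconds = parseInt(parts[parts.length - 1], 10);
    return isNaN(seconds) ? null : seconds;
}

// e.g. localTrackSeconds('spotify:local:Coldplay:Mylo+Xyloto:Paradise:277') === 277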