How would one modify the total number of available talent points a character has in AzerothCore? - world-of-warcraft

So I'm trying to make a server where my friends can log on, hit 80, and then have more than the normal 71 talent points at level 80. I can't figure out how to implement this. Is there a SQL table that holds this information that I can directly update, or is there a better, less direct way to change it?

You can edit the worldserver.conf file to grant more talent points as players level up.
Look for
#
#    Rate.Talent
#        Description: Talent point rate.
#        Default:     1
and set it to e.g. 3. This will grant 3 talent points per level-up.
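For example, a minimal sketch of the edit, assuming a default AzerothCore install where this setting lives in worldserver.conf:

Rate.Talent = 3

The rate is read at startup, so restart the worldserver after editing. With the normal 71 points at level 80, a rate of 3 works out to roughly 213 points.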

Spamassassin - score by time of day sent

Is there a way to assign a score for mail sent between certain hours? I find a lot of spam is sent in the middle of the night, so I would like to give anything between, say, 2am and 5am a score of 2 or 3.
You can use SpamAssassin to penalize mail received within certain hours, but it's messy.
Before we start, verify that SA's primary defenses are properly set up:
DNS Blocklists, including DNSBLs & URIBLs, are a necessity; set them up before all else
Bayes in SpamAssassin is another must-have, though it requires training
Use Razor for fuzzy matching (see also Installing Razor) if the license works for you.
If that's insufficient, then you can address the sort of issue you're keying on. Try:
The RelayCountry plugin to penalize countries you never converse with
The TextCat plugin to discriminate against the languages you never converse in
If all that doesn't help enough, then (and only then) you can consider what you proposed. Read on.
Don't forget about time zones; that's why you can't use the Date header, which reflects the sender's clock rather than yours. This type of rule is not safe for deployments whose conversations span too many time zones, and you must ensure the MX servers are all consistent and on the same time zone. Be aware that daylight saving time (aka "summer time") can be annoying here.
Identify a relay that your receiving infrastructure adds but is added before SpamAssassin runs (so SA can see it). This will manifest as a Received header near the top of your email. Again, make sure it's actually visible to SpamAssassin; the Received header added by your IMAP server will not be visible.
It is possible that you have SpamAssassin configured to run before any internal relay is stamped into the message. If this is the case, do not proceed further as you cannot reliably determine the local time.
Okay, all caveats aside, here's an example Received header:
Received: from external-host.example.com
(external-host.example.com [198.51.100.25])
by mx.mydomain with ESMTPS id ABC123DEF456;
Fri, 13 Mar 2020 12:34:56 -0400 (EDT)
This must be a header one of your systems adds or else it could have a different time zone, clock skew, or even a forged timestamp.
Match that in a rule that clearly denotes you as the author (by convention, start it with your initials):
header CL_RCVD_WEE_HOURS Received =~ /\sby\smx\.mydomain\swith\s[^:]{9,64}+:(?<=[0 ][2-4]:)[0-9:]{5}\s-0[45]00\s/
describe CL_RCVD_WEE_HOURS Received by our mx.mydomain relay between 2a and 5a EST/EDT
score CL_RCVD_WEE_HOURS 0.500
A walk through that regex (see also an interactive explanation at Regex101):
First, you need to verify that it's your relay, matched by name: by mx.mydomain with
Then, skip ahead 9-64 non-colon characters (quickly, with no backtracking, hence the possessive + sign). You'll need to verify your server doesn't insert any colons in this stretch
The real meat is in a look-behind (since we actually skipped over the hour for speed purposes), which seeks the leading zero (or else a space) and then the 2, 3, or 4 (not 5 since we don't want to match a time like 05:59:59)
Finally, there's a sanity check to ensure we're looking at the right time zone. I assumed you're in the US on the east coast, which is -0400 or -0500 depending on whether daylight savings is in effect
So you'll need to change the server name, review whether the colon trick works with your relay, and possibly adjust the time zone regex.
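Before deploying, you can sanity-check the rule with SpamAssassin's own tooling (the file and message names here are hypothetical):

# put the three rule lines in a local config file, e.g. /etc/mail/spamassassin/local_wee_hours.cf
spamassassin --lint
# then run a saved message that arrived in the target window through test mode
spamassassin -t < sample-wee-hours.eml | grep CL_RCVD_WEE_HOURS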
I also gave this a lower score than you desired. Start low and slowly raise it as needed. 3.000 is a really high value.

How to change the middle node location in the torrc?

I am trying to edit my torrc and make all of the nodes funnel through one country.
So far I am able to force the entry and exit nodes, but I don't know how to change the middle node... any ideas?
I have already tried "MiddleNodes" and "RelayNodes":
EntryNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1
It's possible to restrict middle nodes with the MiddleNodes option, per the Tor docs: https://2019.www.torproject.org/docs/tor-manual.html.en
MiddleNodes node,node,…
A list of identity fingerprints and country
codes of nodes to use for "middle" hops in your normal circuits.
Normal circuits include all circuits except for direct connections to
directory servers. Middle hops are all hops other than exit and entry.
This is an experimental feature that is meant to be used by
researchers and developers to test new features in the Tor network
safely. Using it without care will strongly influence your anonymity.
This feature might get removed in the future. The HSLayer2Node and
HSLayer3Node options override this option for onion service circuits,
if they are set. The vanguards addon will read this option, and if
set, it will set HSLayer2Nodes and HSLayer3Nodes to nodes from this
set. The ExcludeNodes option overrides this option: any node listed in
both MiddleNodes and ExcludeNodes is treated as excluded. See the
ExcludeNodes option for more information on how to specify nodes.
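Given that, a minimal torrc sketch combining MiddleNodes with the options already in the question (per the manual's warning above, pinning all three hop positions this way will strongly influence your anonymity):

EntryNodes {us},{ca}
MiddleNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1

This needs a Tor release recent enough to know the option (see the note below about January 2019); older versions will reject the unknown torrc line and refuse to start.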
Edit: See the newer answer by #user1652110 describing the MiddleNodes option, which was added in January 2019.
There is no option to do so. The closest option you can try is ExcludeNodes, using as large a list of country codes as you can come up with that doesn't include the countries you do want to use.
Also note, at the time of writing, limiting your circuits' entry and exit points to relays in the US and Canada might severely limit your performance, anonymity, and reliability since there just aren't that many high-bandwidth exits and guards in these two countries.

1000 rows limit for chef-api module/wrapper

So I'm using this Node module to connect to Chef from my API:
https://github.com/normanjoyner/chef-api
It contains a method called "partialSearch" which fetches a chosen set of attributes for all nodes that match a given criteria. The problem I have: one of our environments has 1386 nodes attached to it, but the module seems to return 1000 results at most.
There does not seem to be any method to "offset" the results. This module works pretty well, and it's a shame this feature is not implemented, since its absence really limits the module's usefulness.
Has anyone bumped into a similar issue with this module who can advise how to work around it?
Here is an extract of my code:
chef.config(SetOptions(environment));

console.log("About to search for any servers ...");

chef.partialSearch('node',
  {
    q: "name:*"
  },
  {
    name: ['name'],
    ipaddress: ['ipaddress'],
    chef_environment: ['chef_environment'],
    ip6address: ['ip6address'],
    run_list: ['run_list'],
    chef_client: ['chef_client'],
    ohai_time: ['ohai_time']
  },
  function(err, chefRes) {
    if (err) throw err;
    console.log(chefRes);  // never more than 1000 rows come back
  });
Regards!
The maximum is 1000 results per page, but you can still request the pages in order. The Chef API doesn't have a formal cursor system for pagination; it's just separate requests with a different start value. This can sometimes lead to minor desync (an item at the end of one page might shift in ordering and also show up at the start of the next page), so make sure you handle that. That said, the fancy API in the client library you linked doesn't seem to expose that option, so you'll have to add it or otherwise work around the problem. Check out https://github.com/sethvargo/chef-api/blob/master/lib/chef-api/resources/partial_search.rb#L34 for a Ruby implementation that does handle it.
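At the raw Chef Server API level, paging a partial search is just repeated POSTs that differ only in the documented start query parameter (the JSON body holds the attribute paths to return), for example:

POST /search/node?q=name:*&start=0&rows=1000
POST /search/node?q=name:*&start=1000&rows=1000

Keep advancing start by rows until a page comes back with fewer than rows results; the response also carries a total field you can check against.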
We have run into similar issues with Chef libraries. One workaround you might find useful, if you have some node attribute that you can use to segment all of your nodes into groups of fewer than 1000, is to query each segment separately (sketched below).
If you have no such natural segmentation attribute already, a simple implementation would be to create a new attribute called segment and, during your chef-client runs, set the attribute's value randomly to a number between 1 and 5.
Now you can perform 5 queries (each query searching only a single segment) and you should find all your nodes; if the randomness is working, each group will contain around 277 nodes (1386/5).
As your node population grows, you'll need to keep increasing the number of segments to ensure each segment stays under 1000 nodes.
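Here is a sketch of those queries using the same module and the chef object configured in the question's code, assuming each node has set a top-level segment attribute between 1 and 5, and that the response mirrors the Chef search API shape ({ rows, start, total }):

const SEGMENTS = 5;
let remaining = SEGMENTS;
let allNodes = [];

for (let i = 1; i <= SEGMENTS; i++) {
  chef.partialSearch('node',
    { q: 'segment:' + i },  // one query per segment, each well under 1000 rows
    { name: ['name'], ipaddress: ['ipaddress'] },
    function (err, chefRes) {
      if (err) throw err;
      allNodes = allNodes.concat(chefRes.rows);
      if (--remaining === 0) {  // all five callbacks have returned
        console.log('Found ' + allNodes.length + ' nodes total');
      }
    });
}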

How to work with the COUNTER in Nagios or RRD?

I have the following problem:
I want to graph data that should be constantly increasing, for example the number of visits to a link. After some time the visit counter is restarted and starts counting from the beginning again, but I still want the graph to keep increasing continuously. The tool I use for this lets a data source be defined as COUNTER, GAUGE, AVERAGE, and so on, and I want to use COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I guess it is the same as the COUNTER in RRD, but I ran into some strange behaviour when creating one.
I submit the value '1', then '2', and expect the chart to show 3, but it doesn't work that way. And after a restart, for example, submitting 1 again should make it 4.
Can anyone who has dealt with these things tell me briefly how this COUNTER works?
I saw that COUNTER is used for traffic on routers, etc., but I want to apply it to a regular graph that just increases.
The RRD data type COUNTER converts the input data into a rate by taking the difference between this sample and the last sample and dividing by the time interval (note that data normalisation also takes place, and this is dependent on the Interval setting of the RRD).
Thus, updating with a constantly increasing count will result in a rate-of-change value being graphed.
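For example, with a 300-second step, consecutive updates of 100 and then 400 are stored as (400 - 100) / 300 = 1 per second, not as 400. A minimal creation sketch (the filename, DS name, and limits are made up for illustration):

rrdtool create visits.rrd --step 300 \
  DS:visits:COUNTER:600:0:U \
  RRA:AVERAGE:0.5:1:2016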
If you want to see your graph actually constantly increasing, i.e. showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE, which assumes any rate conversion has already been done.
If you want to submit rate values (e.g. 2 in the last minute) but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE and use a CDEF in your RRDgraph command of the form CDEF:x=y,PREV,+ to obtain the running total. Of course you would only have this relative to the start of the graph time window; maybe a separate call would let you determine what base value to use.
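A sketch of that graph command (file and DS names are hypothetical). Note that a bare y,PREV,+ stays UNKNOWN forever, because PREV is unknown at the first sample, so it's guarded with UN/IF here; a gap in the data will also reset the total in this simple form:

rrdtool graph visits.png --start -1d \
  DEF:y=visits.rrd:visits:AVERAGE \
  CDEF:x=y,PREV,UN,0,PREV,IF,+ \
  LINE2:x#0000ff:"running total"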
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.

Explain the difference in how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed on its input.
In its per-target options MRTG has 'gauge', 'absolute', and a default (no option) behaviour for what to do with incoming data, or how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics of how many packets were received by the interface.
We take it from /proc/net/dev or look at ifconfig output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So as I imagine it, there are two types of possible statistics:
1. How fast the value changes over the time interval. In other words - activity.
2. A simple, as-is growing graph that just plots every new value each minute (or any other time interval).
The first graph will be jumpy (activity). The second will just grow over time.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' plots values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), and that the default or 'absolute' behaviour tries to calculate the rate between neighbouring measurements, but what's the difference between the last two?
Can you explain in a simple manner which behaviour corresponds to which of the three options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, one that possibly wraps around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs)
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
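As a concrete fragment, here is what the per-target choice looks like in mrtg.cfg (the target name and script are hypothetical; the backticked script prints the two values, uptime, and target name on four lines, as MRTG expects from external commands):

Target[visits]: `/usr/local/bin/count-visits.sh`
Options[visits]: gauge
MaxBytes[visits]: 100000
Title[visits]: Site visits

Omit gauge from the Options line to get the default counter behaviour, or use absolute for sources that reset themselves on every read.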
For information on this, see Alex van der Bogaerdt's excellent tutorial
