How can I add a value to my args in discord.js? - node.js

message.channel.bulkDelete(args[0]+1)
.then(messages => message.channel.send(`${emojiyes} Deleted **${messages.size}** messages!`) | console.log(`Deleted ${messages.size} messages!`))
This deletes the wrong number of messages, for example 21 instead of 3 (_clear 2 deletes 21 messages, not 3). Can someone help me?

args[0] is a string, so when you combine it with 1 you are doing "2" + 1, which results in the string "21". If you convert the string to a number first, the addition is calculated correctly. You can use the parseInt() function to convert the string into a number.
message.channel.bulkDelete(parseInt(args[0], 10) + 1) // +1 presumably so the command message itself is also deleted
  .then(messages => {
    message.channel.send(`${emojiyes} Deleted **${messages.size}** messages!`);
    console.log(`Deleted ${messages.size} messages!`);
  });

Related

Lost Clients PBI

I am trying to get the number of lost clients per month. The code I'm using for the measure is set forth next:
LostClientsRunningTotal =
VAR currdate = MAX('Date'[Date])
VAR turnoverinperiod = [Turnover]
VAR clients =
    ADDCOLUMNS(
        Client,
        "Turnover Until Now",
            CALCULATE(
                [Turnover],
                DATESINPERIOD('Date'[Date], currdate, -1, MONTH)
            ),
        "Running Total Turnover",
            [RunningTotalTurnover]
    )
VAR lostclients =
    FILTER(
        clients,
        [Running Total Turnover] > 0 &&
        [Turnover Until Now] = 0
    )
RETURN
    IF(turnoverinperiod > 0, COUNTROWS(lostclients))
The problem is that this only gives me the running total; the result it returns is the following:
[screenshot of the running-total result omitted]
What I need is the lost clients per month so I tried to use the dateadd function to get the lost clients of the previous month and then subtract the current.
The desired result would be, for Nov-22 for instance, 629 (December running total) - 544 (November running total) = 85.
For some reason the DATEADD function is not returning the desired result and I can't make heads or tails of it.
Can you tell me how should I approach this issue please? Thank you in advance.

Logstash convert date duration from string to hours

I have a column like this:
business_time_left
3 Hours 24 Minutes
59 Minutes
4 Days 23 Hours 58 Minutes
0 Seconds
1 Hour
and so on..
What I want to do in Logstash is to convert this entirely into hours.
So my values should be converted entirely to something like
business_time_left
3.24
0.59
119.58
0
1
Is this possible?
My config file:
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => ["result"] }
}
output {
  elasticsearch {
    hosts => ["yourelasticIP"]
    index => "inc"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
Sample JSON input data returned when the URL is hit:
{"result": [
  {
    "made_sla": "true",
    "Type": "incident resolution p3",
    "sys_updated_on": "2019-12-23 05:00:00",
    "business_time_left": " 59 Minutes"
  },
  {
    "made_sla": "true",
    "Type": "incident resolution l1.5 p4",
    "sys_updated_on": "2019-12-24 07:00:00",
    "business_time_left": "3 Hours 24 Minutes"
  }
]}
Thanks in advance!
Q: Is this possible?
A: Yes.
Assuming your json and split filters are working correctly and the field business_time_left holds a single value like the ones you showed (e.g. 4 Days 23 Hours 58 Minutes), I personally would do the following:
First, make sure that your data follows a consistent pattern, i.e. standardize the quantity labels. This means that minutes are always labeled as "Minutes", not "Mins", "min" or whatever; a sketch for this is shown below.
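As a minimal sketch inside your filter section (the exact spelling variants to normalize are assumptions; adjust them to your data), the mutate filter's gsub option can do this normalization:
mutate {
  gsub => [
    # field, regex, replacement -- the variants below are assumptions
    "business_time_left", "\bMins?\b", "Minutes",
    "business_time_left", "\bHrs?\b", "Hours"
  ]
}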
Next, you can parse the field with the grok filter like so:
filter {
  grok {
    match => { "business_time_left" => "(%{INT:calc.days}\s+Days)?%{SPACE}?(%{INT:calc.hours}\s+Hours)?%{SPACE}?(%{INT:calc.minutes}\s+Minutes)?%{SPACE}?(%{INT:calc.seconds}\s+Seconds)?%{SPACE}?" }
  }
}
This will extract all available values into the desired fields, e.g. calc.days. The ? character prevents grok from failing if e.g. there are no seconds. You can test the pattern with a grok debugger, for example the one built into Kibana's Dev Tools.
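For instance, applied to the sample value "4 Days 23 Hours 58 Minutes", the pattern should yield roughly:
calc.days    => "4"
calc.hours   => "23"
calc.minutes => "58"
(calc.seconds stays unset because its optional group did not match.)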
With the data extracted, you can implement a ruby filter to aggregate the numeric values like so (untested though):
ruby {
  code => '
    days = event.get("calc.days")
    hours = event.get("calc.hours")
    minutes = event.get("calc.minutes")
    sum = 0
    if days
      # one day contributes 24 hours
      sum += days.to_i * 24
    end
    if hours
      sum += hours.to_i
    end
    if minutes
      # divide by 100.0 (not 100) to avoid integer division, so that
      # e.g. "58 Minutes" becomes 0.58 as in the desired output (119.58)
      sum += minutes.to_i / 100.0
    end
    # seconds and so on ...
    event.set("business_time_left_as_hours", sum)
  '
}
So basically you check if the values are present and add them to a sum with your custom logic.
event.set("business_time_left_as_hours", sum) will set the result as a new field to the document.
These code snippets are not intended to work out of the box; they are just hints. So please check the documentation on the ruby filter, Ruby coding in general, and so on.
I hope I could help you.

how to get only date string from a long string

I know there are lots of Q&As about extracting a datetime from a string, e.g. using dateutil.parser:
import dateutil.parser as dparser
dparser.parse('something sep 28 2017 something',fuzzy=True).date()
output: datetime.date(2017, 9, 28)
but my question is how to know which part of the string produced this extraction, e.g. I want a function that also returns 'sep 28 2017':
datetime, datetime_str = get_date_str('something sep 28 2017 something')
outputs: datetime.date(2017, 9, 28), 'sep 28 2017'
Any clue or direction I could search in?
Extending the discussion with @Paul and following the solution from @alecxe, I have put together the following solution, which works on a number of test cases (I've made the problem slightly more challenging):
Step 1: get excluded tokens
import dateutil.parser as dparser
ostr = 'something sep 28 2017 something abcd'
_, excl_str = dparser.parse(ostr,fuzzy_with_tokens=True)
gives outputs of:
excl_str: ('something ', ' ', 'something abcd')
Step 2 : rank tokens by length
excl_str = list(excl_str)
excl_str.sort(reverse=True, key=len)
gives a sorted token list:
excl_str: ['something abcd', 'something ', ' ']
Step 3: delete tokens and ignore space element
for i in excl_str:
    if i != ' ':
        ostr = ostr.replace(i, '')
return ostr
gives a final output
ostr: 'sep 28 2017 '
Note: step 2 is required because it will cause problems if any shorter token is a subset of a longer one. E.g., in this case, if deletion followed the order ('something ', ' ', 'something abcd'), the replacement process would also remove 'something ' from 'something abcd', 'abcd' would never get deleted, and we'd end up with 'sep 28 2017 abcd'. A consolidated sketch of the three steps is shown below.
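Putting the three steps together as one function (a minimal sketch; the name get_date_str and the final .strip() are my additions, and it assumes fuzzy_with_tokens behaves as shown above):
import dateutil.parser as dparser

def get_date_str(ostr):
    # Step 1: parse and collect the tokens dateutil skipped
    dt, excl = dparser.parse(ostr, fuzzy_with_tokens=True)
    # Step 2: longest tokens first, so a shorter token that is a
    # substring of a longer one does not break the replacement
    for tok in sorted(excl, key=len, reverse=True):
        # Step 3: delete tokens, ignoring whitespace-only ones
        if tok.strip():
            ostr = ostr.replace(tok, '')
    return dt.date(), ostr.strip()

print(get_date_str('something sep 28 2017 something abcd'))
# expected: (datetime.date(2017, 9, 28), 'sep 28 2017')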
Interesting problem! There is no direct way to get the parsed-out date string from the bigger string with dateutil. The problem is that the dateutil parser does not even have this string available as an intermediate result, as it really builds parts of the future datetime object on the fly, character by character (see the parser source).
It, though, also collects a list of skipped tokens which is probably your best bet. As this list is ordered, you can loop over the tokens and replace the first occurrence of the token:
from dateutil import parser

s = 'something sep 28 2017 something'
parsed_datetime, tokens = parser.parse(s, fuzzy_with_tokens=True)
for token in tokens:
    s = s.replace(token.lstrip(), "", 1)
print(s)  # prints "sep 28 2017"
I am though not 100% sure if this would work in all the possible cases, especially, with the different whitespace characters (notice how I had to workaround things with .lstrip()).

DocuSign document blank after request and save

After requesting a document via the DocuSign API and writing it to the file system, it appears blank when opened. The docs say the call returns a "PDF File", and the response body is shown below.
const doc = await rp.get(
  `${apiBaseUrl}/${BASE_URI_SUFFIX}/accounts/${accountId}/envelopes/${envelopeId}/documents/${document.documentId}`,
  { auth: { bearer: token } }
);
fs.writeFile(document.name, new Buffer(doc, "binary"), function(err) {
  if (err) throw err;
  console.log('Saved!');
});
Response body:
{
"documents": [
{
"name": "Name of doc.docx",
"content": "%PDF-1.5\n%\ufffd\ufffd\ufffd\ufffd\n%Writing objects...\n4 0 obj\n<<\n/Type /Page\n/Resources 5 0 R\n/Parent 3 0 R\n/MediaBox [0 0 612 792 ]\n/Contents [6 0 R 7 0 R 8 0 R 9 0 R 10 0 R ]\n/Group <<\n/Type /Group\n/S /Transparency\n/CS /DeviceRGB\n>>\n/Tabs /S\n/StructParents 0\n>>\nendobj\n5 0 obj\n<<\n/Font <<\n/F1 11 0 R\n/F2 12 0 R\n/F3 13 0 R\n>>\n/ExtGState <<\n/GS7 14 0 R\n/GS8 15 0 R\n>>\n/ProcSet [/PDF /Text ...
}
]}
Screenshot of document: [image of the blank PDF omitted]
The EnvelopeDocuments::get API method returns the PDF itself, not an object as you are showing.
For a working example of the method, see example 7, part of the Node.js set of examples.
Added
Also, the fs.writeFile call supports writing from a string source. I'd try:
fs.writeFile(document.name, doc, { encoding: "binary" }, function(err) {
  if (err) throw err;
  console.log('Saved!');
});
Incorrect encoding
Your question shows the PDF's content as a string with the control characters encoded as Unicode escapes:
"%PDF-1.5\n%\ufffd\ufffd\ufffd\ufffd\n%Writing objects...
but this is not correct. The beginning of a PDF file includes binary characters that are not displayable except in a hex editor. This is what you should see at the top of a PDF:
[hex dump of a PDF header omitted]
Note the 10th character. It is hex C4. In your string, the equivalent character has been encoded as \ufffd (it is OK that they aren't the same character; they are two different PDFs). The fact that the character has been encoded is your problem.
Solutions
Convince the request library and the fs.writeFile method not to encode the data, or to decode it as needed (a sketch is shown after these alternatives).
Or use the DocuSign Node.js SDK as I show in the example code referenced above.
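A minimal sketch of the first option, assuming request-promise is in use as in the question (passing encoding: null tells the underlying request library to return the raw body as a Buffer instead of decoding it to a string):
const doc = await rp.get(
  `${apiBaseUrl}/${BASE_URI_SUFFIX}/accounts/${accountId}/envelopes/${envelopeId}/documents/${document.documentId}`,
  {
    auth: { bearer: token },
    encoding: null  // return the raw bytes, no string decoding
  }
);
fs.writeFile(document.name, doc, function(err) {
  if (err) throw err;
  console.log('Saved!');
});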

pd.to_datetime to parse '2010/1/1' rather than '2010/01/01'

I have a dataframe which contains a column 'trade_dt' like this
2009/12/1
2009/12/2
2009/12/3
2009/12/4
I ran this and got the following error:
benchmark['trade_dt'] = pd.to_datetime(benchmark['trade_dt'], format='%Y-&m-%d')
ValueError: time data '2009/12/1' does not match format '%Y-&m-%d' (match)
how to solve it? Thanks~
You need to change the format so it matches the data: replace & with % and the - separators with /:
benchmark['trade_dt'] = pd.to_datetime(benchmark['trade_dt'], format='%Y/%m/%d')
Removing the format also works with the sample data (though I'm not sure about your real data):
benchmark['trade_dt'] = pd.to_datetime(benchmark['trade_dt'])
print (benchmark)
trade_dt
0 2009-12-01
1 2009-12-02
2 2009-12-03
3 2009-12-04
