I am trying to find out how to get the price of Bitcoin, Ethereum, Litecoin and Bitcoin Cash in AUD at a particular date.
I have a table as follows:
+------------+-------+
| Date | Price |
+------------+-------+
| 16/03/2016 | |
| 19/04/2016 | |
| 03/12/2017 | |
+------------+-------+
I have tried using =IMPORTXML("http://coinmarketcap.com/currencies/bitcoin/","//span[@id='quote_price']") in the price column, but it doesn't seem to work.
I would use:
https://min-api.cryptocompare.com/documentation?key=Historical&cat=dataPriceHistorical
In order to use this service you need an API key; you can register at their website: https://www.cryptocompare.com/cryptopian/api-keys
You also need to convert your date to a Unix timestamp:
https://www.unixtimestamp.com/index.php
e.g. 16/03/2016 => 1458086400
so ts=1458086400
The complete API call would resemble:
https://min-api.cryptocompare.com/data/pricehistorical?fsym=BTC&tsyms=AUD&ts=1458086400&api_key=your-key
Executing this call gives: AUD: 582.52
The value in 'Kolom1' ('Column1' in Dutch) is the resulting value in AUD. This value can be retrieved by executing the URL in the 'URL' field: =WEBSERVICE([#URL])
Remark: to convert your dates (16/03/2016 etc.) in Excel, see this post:
Excel date to Unix timestamp
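As a sketch of the whole chain in Excel (assuming your dates are in column A starting at A2, and YOUR-KEY stands in for your actual CryptoCompare key; the helper-column layout is just one way to do it):
// B2: convert the Excel date to a Unix timestamp (seconds since 1970-01-01)
B2=(A2-DATE(1970,1,1))*86400
// C2: build the API URL for that timestamp
C2="https://min-api.cryptocompare.com/data/pricehistorical?fsym=BTC&tsyms=AUD&ts="&B2&"&api_key=YOUR-KEY"
// D2: fetch the response (WEBSERVICE is Excel on Windows; Google Sheets needs a different approach)
D2=WEBSERVICE(C2)
Fill these down for each date row. The response is JSON (something like {"BTC":{"AUD":582.52}}), so you still need to extract the number, e.g. with text functions such as MID and FIND.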
I'm trying to create a query in Application Insights that can show me the absolute and average number of messages in conversations over a particular time period. I'm using the LUIS trace example to get the context+LUIS information, which is where I'm pulling the conversationID from. I can get a table showing the number of messages per conversation, but I would also like to have an average number of messages for the data set. Either a static average or a rolling average (by pulling in timestamp) would be fine. I can get this value by doing a second summarize statement, but then I lose the granularity from the first. Here is my query:
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, url, id
| parse kind=regex url with * "(?i)http://" botName ".azurewebsites.net/api/messages"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend convID = tostring(customDimensions.LUIS_botContext_conversation_id)
| order by timestamp desc nulls last
| project timestamp, botName, convID
| summarize messages=count() by conversation=convID
This gives me a table of conversation IDs with the message count for each conversation. I would also like to see the average number of messages per conversation. For example, if I have 4 conversations with 100 messages total, I want to see that the average is 25. I can get this result by doing a second summarize statement | summarize messages=sum(messages), avgMessages=avg(messages), but then of course I can no longer see the individual conversations. Is there any way to see both in the same table?
You can write 2 queries: one for "a table of conversation IDs with the message count for each conversation", and another for "the average number of messages per conversation". Consider using a let statement for each query.
The trick here is that, in both of the 2 queries, after the summarize statement, you add a line like | extend myidentifier="aaa" at the end.
Then you can join the 2 queries by using myidentifier.
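A minimal sketch of that pattern (simplified from the question's full query; the constant "aaa" is arbitrary and the names are illustrative):
// per-conversation message counts, tagged with a constant join key
let perConv = traces
| where message == "LUIS"
| extend convID = tostring(customDimensions.LUIS_botContext_conversation_id)
| summarize messages = count() by conversation = convID
| extend myidentifier = "aaa";
// overall average, tagged with the same constant key
let overall = perConv
| summarize avgMessages = avg(messages)
| extend myidentifier = "aaa";
perConv
| join kind=inner (overall) on myidentifier
| project conversation, messages, avgMessages
Because every row carries the same constant, the join effectively attaches the single average row to every conversation row.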
I couldn't figure out how to do this without losing granularity from the first list (i.e. I couldn't figure out how to calculate the average per period, e.g. per day), but the following query does at least get me the average across whatever timestamp filter I set, which ultimately gets me to the data I was looking for.
requests
| where url endswith "messages"
| where timestamp > ago(30d)
| project timestamp, url, id
| parse kind=regex url with * "(?i)http://" botName ".azurewebsites.net/api/messages"
| join kind=inner (
traces | extend id = operation_ParentId
) on id
| where message == "LUIS"
| extend convID = tostring(customDimensions.LUIS_botContext_conversation_id)
| order by timestamp desc nulls last
| project timestamp, botName, convID
| summarize messages=count() by conversation=convID
| summarize conversations=count(), messageAverage=avg(messages)
I am writing Kusto queries to analyze the state of the database when simple queries run for a long time.
For example, a row in dependencies with type == "SQL" is a SQL Server query. If its duration at timestamp 2019-06-24T16:41:24.856 is >= 15000 (i.e. >= 15 secs),
I would like to query and analyze dtu_consumption_percent from AzureMetrics between 2019-06-24T16:40:24.856 and 2019-06-24T16:42:24.856 (1 minute before and 1 minute after the query completion time) to determine the state of the database at that point in time.
Question: I wonder if anyone can give me pointers on getting the database name out of the target column from dependencies?
target looks as below:
tcp:sqlserver-xxx-xxxxxx.database.windows.net | DDDDD
and I need to extract DDDDD to join to the AzureMetrics column Resource.
Thank you!
As Yoni says, you can use parse, or you could use substring:
let T = datatable(Value:string) [
'tcp:sqlserver-xxx-xxxxxx.database.windows.net | DDDDD',
'udp:appserver-yyy-yyyyyy.database.contoso.com | EEEEE'
];
T
// Look for the pipe and take everything after it, trimming the leading space
| extend ToSubstring = trim(" ", substring(Value, indexof(Value, "|") + 1))
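This returns:
| Value                                                 | ToSubstring |
|-------------------------------------------------------|-------------|
| tcp:sqlserver-xxx-xxxxxx.database.windows.net | DDDDD | DDDDD       |
| udp:appserver-yyy-yyyyyy.database.contoso.com | EEEEE | EEEEE       |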
https://learn.microsoft.com/en-us/azure/kusto/query/substringfunction
However, if you find yourself doing this a lot, you may want to take a look at Custom Fields:
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/custom-fields
You could use the parse operator:
https://learn.microsoft.com/en-us/azure/kusto/query/parseoperator
print value = 'tcp:sqlserver-xxx-xxxxxx.database.windows.net | DDDDD'
| parse value with * "| " database
this returns:
| value | database |
|-------------------------------------------------------|----------|
| tcp:sqlserver-xxx-xxxxxx.database.windows.net | DDDDD | DDDDD |
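Either way, once the name is extracted, the join back to AzureMetrics described in the question might look roughly like this sketch (it assumes dtu_consumption_percent shows up in AzureMetrics under MetricName; adjust to your actual schema):
dependencies
| where type == "SQL" and duration >= 15000
// split on the pipe, take the part after it, and trim the surrounding spaces
| extend database = trim(" ", tostring(split(target, "|")[1]))
| join kind=inner (
    AzureMetrics
    | where MetricName == "dtu_consumption_percent"
) on $left.database == $right.Resource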
I need to convert the timestamp '1998/02/12 00:00:00' to the date '1998-02-12' using a Cassandra query. Can anyone help me with this?
Is it possible or not?
You can use the toDate function in CQL to get a date out of a datetime.
For example, if your table entry looks like:
id | datetime | value
-------------+---------------------------------+-------
22170825421 | 2018-02-15 14:06:01.000000+0000 | 50
You can run the following query:
select id, datetime, toDate(datetime) as day, value from datatable;
and it will give you:
id | datetime | day | value
-------------+---------------------------------+------------+-------
22170825421 | 2018-02-15 14:06:01.000000+0000 | 2018-02-15 | 50
You can't do the string conversion directly in Cassandra, as it only accepts dates as YYYY-mm-dd, so you need to use some other method (depending on the language you're using) to convert your string into that format.
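For example, a minimal Python sketch (assuming the input strings always follow the 'YYYY/MM/DD HH:MM:SS' layout from the question):
from datetime import datetime

# Parse the slash-separated timestamp, then print the dash-separated date
raw = "1998/02/12 00:00:00"
parsed = datetime.strptime(raw, "%Y/%m/%d %H:%M:%S")
print(parsed.strftime("%Y-%m-%d"))  # 1998-02-12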
I'm using a 2-D ArrayList to store the query results of sql.rows in SoapUI Groovy. Outputrows, in the code below, is an ArrayList.
Outputrows = sql.rows("""select CORR.Preferred as preferred, CORR.Category as category, CORR.Currency as currency
    from BENEFICIARY CORR
    join LOCATION LOC on CORR.UID = LOC.UID""")
The problem with the ArrayList is that I'm unable to update the value of a particular cell with the set method; set is not valid for the GroovyRowResult class.
Outputrows.get(row).set(col,categoryValue)
So I am wondering if I can store the query results (Outputrows) in a 2-D map instead and, if so, how I can update the value of any particular row via its map key.
[{'preferred': 'N', 'category': 'Commercial', 'currency': 'USD'}, ...] and so on.
If I want to update Currency for the 3rd row, how can I do that?
Data in the output
Preferred | Category | Currency |
----------------------------------
N | CMP | USD |
----------------------------------
Y | RTL | GBP |
----------------------------------
N | CMP | JPY |
----------------------------------
Y | RTL | USD |
----------------------------------
Now, in 'outputrows' the values are stored starting from the first row (N, CMP, USD) as an ArrayList. I would like to store the values of the query result 'outputrows' as maps instead of an ArrayList, so I can easily access any value in 'outputrows' by map key.
Hope this makes sense.
I needed to use the column name with put instead of the column number:
Outputrows.get(row).put("currency", categoryValue) // correct: updates the existing 'currency' column
Outputrows.get(row).put(2, categoryValue) // wrong: adds a new column named "2" instead of referencing 'currency'
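A small end-to-end Groovy sketch of that fix (a sketch only: it assumes sql is an already-initialized groovy.sql.Sql connection, uses the tables from the question, and 'EUR' is just an example value):
import groovy.sql.Sql

// sql.rows returns a List<GroovyRowResult>; each row behaves like a Map keyed by column name
def Outputrows = sql.rows("""select CORR.Preferred as preferred, CORR.Category as category, CORR.Currency as currency
    from BENEFICIARY CORR
    join LOCATION LOC on CORR.UID = LOC.UID""")

// Update 'currency' on the 3rd row (index 2) by column name
Outputrows[2].put('currency', 'EUR')
// Equivalent map-style subscript access also works
Outputrows[2]['currency'] = 'EUR'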
I'm looking for some thoughts on how you might recreate a 'vlookup' that I currently do in Excel.
I have two tables: Data contains a list of datetime values; DateConverter contains a list of calendar dates and their associated "network dates." Imagine a business where not every day is a workday: if I want to calculate differences between dates, I'm most interested in the number of work days that elapsed between my two dates.
Here is what the data might look like:
Data Table DateConverter Table
================= ===================
| Datetime      | | Calendar date | Network date |
| ------------- | | ------------- | ------------ |
| 6-1-15 8:00a | | 6-1-15 | 1000 |
| 6-2-15 1:00p | | 6-2-15 | 1001 |
| 6-3-15 7:00a | | 6-3-15 | 1002 |
| 6-10-15 3:00p | | 6-4-15 | 1003 |
| 6-15-15 1:00p | | 6-5-15 | 1004 |
| 6-12-15 2:00a | | 6-8-15 | 1005 | // Skips the weekend
| ... | | ... | ... |
In Excel, I can easily map in the network date for each date in the Datetime field with a variant of vlookup:
// Assume that Datetime values are in Column A, Calendar date values in
// Column C, Network date values in Column D - this formula fills Column B
// Headers are in row 1 - first values are in row 2
B2=OFFSET($D$1,COUNTIFS($C:$C,"<"&A2),0)
The formula counts the dates that are less than the lookup value (using COUNTIFS because the values in the search array are dates and the search value is a datetime) and returns the associated network date.
Is there a way to do this in Tableau? Will it require a calculated field or can I do this with some kind of join?
Thanks in advance for the help! Let me know if there is anything I can clarify.
If the tables are on the same data server, you have the option to use joins, which is usually the most efficient way to combine information from different tables. If the tables are on different servers or platforms, then you can't use a single query to join them.
In either case, you can use Tableau data blending, which is sort of like a client-side join of aggregated results from multiple queries. It's a pretty useful technique, but a little more complex and restricted, and usually less efficient, than a server-side join.
So if you have the option to have both tables on the same server, start with that. It will be simpler and likely faster.
Note that if you are going to use a date as a join key, you probably want to define it as a date and not a datetime.
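For instance, a calculated field along these lines (a sketch; [Datetime] is the field from the question, while the name [Join Date] is made up) strips the time component so it can match Calendar date:
// Tableau calculated field, e.g. named [Join Date]
DATE([Datetime])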
@alex-blakemore's response would normally be adequate, but if you can change the schema, you could simply add the network date to the Data table. The hourly granularity should not cause excessive growth, and you don't need to navigate the joining.
Then, instead of counting rows and requiring a sorted table, simply subtract the network dates from each other and add 1 (with the sample data above, 6-1-15 to 6-5-15 gives 1004 - 1000 + 1 = 5 working days, counting both endpoints).
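The difference is then a one-line calculated field (field names here are made up; they assume the network date has been joined in for both the start and end dates):
// Inclusive count of working days between the two dates
[Network Date End] - [Network Date Start] + 1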