I am a beginner to PostgreSQL. Can you please help me with the issue below?
I have a master table and a detail table. Here is a sample of the data:
master table
------------
dept_no  dept_name
1        Marketing
2        R&D
3        HR
4        IT
5        Testing

detail table
------------
dept_no  emp_no  emp_name  emp_desig
1        E001    saritha   Sales Manager
1        E002    latha     Sales Executive
2        E003    veena     Coder
3        E004    geetha    Manager
3        E005    Kavin     Field Officer
I need the result to look like the sample below, which should also include dept_name rows that have no matching employees (null values):
dept_name   emp_name
Marketing   saritha,latha
R&D         veena
HR          geetha,kavin
IT
Testing
Can anybody help me with this query? Thanks in advance.
A possible solution:
select dept_name, string_agg(emp_name,',')
from master
left join detail
on master.dept_no = detail.dept_no
group by master.dept_no, dept_name
order by master.dept_no;
dept_name | string_agg
-----------+---------------
Marketing | latha,saritha
R&D | veena
HR | kavin,geetha
IT |
Testing |
(5 rows)
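If the order of the names inside each department matters, string_agg also accepts an ORDER BY inside the aggregate. A small variation of the query above, assuming you want the names ordered by emp_no:
select dept_name,
       string_agg(emp_name, ',' order by emp_no) as emp_names  -- names concatenated in emp_no order
from master
left join detail
on master.dept_no = detail.dept_no
group by master.dept_no, dept_name
order by master.dept_no;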
Related
So, I was working with the GridDB NodeJs Connector. I know the query to find the null values, which shows the matching records/rows:
SELECT * FROM employees WHERE employee_salary = NaN;
But I want to replace the null values in that column with the mean value of the column, in order to maintain data consistency for data analysis. How do I do that in GridDB?
The Employee table looks like the following:
employee_id | employee_salary | first_name | department
------------+-----------------+------------+-------------
          0 |                 | John       | Sales
          1 |           60000 | Lisa       | Development
          2 |           45000 | Richard    | Sales
          3 |           50000 | Lina       | Marketing
          4 |           55000 | Anderson   | Development
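In plain SQL terms, the intent would look roughly like the sketch below. The table and column names come from the question, but this is only an illustration of the idea; GridDB's TQL may not accept an UPDATE with a subquery like this, so the same logic may need to be done from the NodeJs client instead.
-- Sketch only (standard SQL, not GridDB-specific):
-- replace the missing salaries with the mean of the non-null salaries.
UPDATE employees
SET employee_salary = (SELECT AVG(employee_salary) FROM employees)  -- AVG ignores NULLs
WHERE employee_salary IS NULL;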
I have the lists below:
List 1:
CustomerCode | CustomerName | ProductName | ProductCode | ManufactureDate | ExpiryDate | ProductPrice
*CustomerCode and CustomerName can have duplicate values
List 2:
CustomerCode | AmountPurchased
What I want to achieve:
I want to automatically populate SharePoint List 2 with Power Automate.
Condition:
CustomerCode cannot be duplicated.
I want to group List 1 by CustomerCode, add up the AmountPurchased for each customer, and fill in List 2.
I am new to Power Automate. Your help and assistance are highly appreciated. Thanks!
https://powerusers.microsoft.com/t5/General-Power-Automate/Sum-a-column-from-one-sharepoint-list-with-a-condition-to/td-p/122078
Well documented already in this post :-)
I have a table of bugs that I want to create a line graph visual on:
| Id | Created Date | Closed Date |
|----|--------------|-------------|
| 1 | 01/01/2020 | 01/02/2020 |
| 2 | 01/01/2020 | 01/03/2020 |
| 3 | 02/01/2020 | |
I want to create a line chart visual that shows, per day, how many bugs were created and how many were closed cumulatively (as running totals), using two lines.
Is it possible to create this from the one table (using two y-axes)? Do I need another table for the dates, and what is the best way to create the relationship?
This would be a great use case for a simple measure.
Running Total MEASURE =
CALCULATE (
    COUNT ( 'Table'[ID] ),
    FILTER (
        ALL ( 'Table' ),
        'Table'[Created Date] <= MAX ( 'Table'[Created Date] )
    )
)
In the above DAX expression, simply plug in your created date and the bug ID where appropriate. Basically, it counts the instances of each ID that occur on or before every created date.
Let me know if this helps.
I understand that this is not possible using an UPDATE.
What I would like to do instead is migrate all rows with, say, PK=0 to new rows where PK=1. Are there any simple ways of achieving this?
For a relatively simple way, you could always do a quick COPY TO/FROM in cqlsh.
Let's say that I have a column family (table) called "emp" for employees.
CREATE TABLE stackoverflow.emp (
id int PRIMARY KEY,
fname text,
lname text,
role text
)
And for the purposes of this example, I have one row in it.
aploetz#cqlsh:stackoverflow> SELECT * FROM emp;
id | fname | lname | role
----+-------+-------+-------------
1 | Angel | Pay | IT Engineer
If I want to re-create Angel with a new id, I can COPY the table's contents TO a .csv file:
aploetz#cqlsh:stackoverflow> COPY stackoverflow.emp TO '/home/aploetz/emp.csv';
1 rows exported in 0.036 seconds.
Now, I'll use my favorite editor to change the id of Angel to 2 in emp.csv. Note that if you have multiple rows in your file (that don't need to be updated), this is your opportunity to remove them:
2,Angel,Pay,IT Engineer
I'll save the file, and then COPY the updated row back into Cassandra FROM the file:
aploetz#cqlsh:stackoverflow> COPY stackoverflow.emp FROM '/home/aploetz/emp.csv';
1 rows imported in 0.038 seconds.
Now Angel has two rows in the "emp" table.
aploetz#cqlsh:stackoverflow> SELECT * FROM emp;
id | fname | lname | role
----+-------+-------+-------------
1 | Angel | Pay | IT Engineer
2 | Angel | Pay | IT Engineer
(2 rows)
For more information, check the DataStax doc on COPY.
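If the goal is a true migration rather than a copy, the old row can then be removed. A minimal sketch in CQL, assuming the example keyspace and id above:
-- Delete the original row once the re-keyed copy has been verified.
DELETE FROM stackoverflow.emp WHERE id = 1;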
I have a pivot table fed from a MySQL view. Each returned row is basically an instantiation of "a person, with a role, at a venue, on a date". Each cell then shows a count of persons (let's call it person_id).
When you pivot this in Excel, you get a nice table of the form:
| Dates -->
--------------------------
Venue |
Role | -count of person-
This makes a lot of sense, and the end user likes this format, but the requirement has changed to group the columns (dates) into weeks.
When you group them in the normal way, the count is then applied across the grouped columns as well, so each week shows the count over the whole week. This is, of course, logical behaviour, but what I actually want is the max() of the original daily count().
So the question: Does anyone know how to have cells count(), but the grouping perform a max()?
To illustrate this, imagine the columns for a week. Then imagine the max() grouped as a week, giving:
Old:
| M | T | W | T | F | S | S ||
--------------------------------------- .... for several weeks
Venue X |
Role Y| 1 | 1 | 2 | 1 | 2 | 3 | 1 ||
New (grouped by week)
| Week 1 | ...
---------------------------
Venue X |
Role Y| 3 | ...
I'm not at my PC, but the steps below should be broadly correct:
You should be able to right-click on the date field in the pivot table and select Group.
Then highlight week; you may have to select year also.
Lastly, right-click on the count data you already have, expand the "Summarise values by" option, and select Max.