I have been stuck on this problem for a while now and cannot figure it out. Hoping someone can help me.
I think my situation is pretty simple, so I feel extra stupid for having to post this. Nonetheless: I have a database, let's call it tempdb, that was created by user ikaros on Postgres 13.3 (Ubuntu 13.3-1.pgdg16.04+1).
Here is the output from \l+ with irrelevant information omitted.
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------------------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
...
ikaros | ikaros | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 8029 kB | pg_default |
tempdb | ikaros | UTF8 | C | C | =T/ikaros +| 13 GB | pg_default |
| | | | | ikaros=CTc/ikaros +| | |
| | | | | johndoe=CTc/ikaros | | |
...
Currently, johndoe can connect to the database tempdb, but when executing a query, gets a message about not having sufficient table-level privileges: Error: Unable to execute query: Fatal Error; Reason: Error: (ERROR: permission denied for table settings)
I want johndoe to have full read privileges on tempdb along with all tables inside. How can I go about that? Thanks in advance!
According to the Postgres documentation, you can use the queries below to grant permissions to a user:
-- Grant the ability to connect to the database
grant connect on database [YOUR_DATABASE] to [USERNAME];
-- Grant usage on the schema
grant usage on schema [SCHEMA1, SCHEMA2, ...] to [USERNAME];
-- Grant access to all tables in a schema (use select instead of all for read-only access)
grant all on all tables in schema [SCHEMA1, SCHEMA2, ...] to [USERNAME];
grant select on all tables in schema [SCHEMA1, SCHEMA2, ...] to [USERNAME];
-- Grant select on a specific table
grant select on table [YOUR_SCHEMA].[YOUR_TABLE] to [USERNAME];
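Applied to the case above (a sketch, assuming johndoe only needs read access and the tables live in the default public schema):
grant connect on database tempdb to johndoe;
grant usage on schema public to johndoe;
grant select on all tables in schema public to johndoe;
-- "on all tables" only covers tables that exist right now; to also cover
-- tables ikaros creates later, set default privileges as well:
alter default privileges for role ikaros in schema public grant select on tables to johndoe;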
[Question posted by a user on YugabyteDB Community Slack]
After renaming a table, do the existing partitions attached to that table remain as they are?
Yes.
yugabyte=# \dt
List of relations
Schema | Name | Type | Owner
--------+-----------------------+-------+----------
public | order_changes | table | yugabyte
public | order_changes_2019_02 | table | yugabyte
public | order_changes_2019_03 | table | yugabyte
public | order_changes_2020_11 | table | yugabyte
public | order_changes_2020_12 | table | yugabyte
public | order_changes_2021_01 | table | yugabyte
public | people | table | yugabyte
public | people1 | table | yugabyte
public | user_audit | table | yugabyte
public | user_credentials | table | yugabyte
public | user_profile | table | yugabyte
public | user_svc_account | table | yugabyte
(12 rows)
yugabyte=# alter table order_changes RENAME TO oc;
ALTER TABLE
yugabyte=# \dS+ oc
Table "public.oc"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-------------+------+-----------+----------+---------+----------+--------------+-------------
change_date | date | | | | plain | |
type | text | | | | extended | |
description | text | | | | extended | |
Partition key: RANGE (change_date)
Partitions: order_changes_2019_02 FOR VALUES FROM ('2019-02-01') TO ('2019-03-01'),
order_changes_2019_03 FOR VALUES FROM ('2019-03-01') TO ('2019-04-01'),
order_changes_2020_11 FOR VALUES FROM ('2020-11-01') TO ('2020-12-01'),
order_changes_2020_12 FOR VALUES FROM ('2020-12-01') TO ('2021-01-01'),
order_changes_2021_01 FOR VALUES FROM ('2021-01-01') TO ('2021-02-01')
Postgres, and therefore YugabyteDB, doesn't actually identify an object by its name; it uses the object's OID (object ID).
That means you can rename an object without causing any harm, because the name is simply an entry in the catalog, and the object itself is identified by its OID.
This has other side effects as well: if you create a table, run a SQL statement like 'select count(*) from table', drop the table, then create a table with the same name and run the exact same SQL, you will get two records in pg_stat_statements with identical SQL text. This seems weird from the perspective of databases where the SQL area is shared. In Postgres, only pg_stat_statements is shared; there is no SQL cache.
pg_stat_statements does not store the SQL text; it stores the query tree (an internal representation of the SQL) and symbolizes the tree, which makes it appear like SQL again. The query tree uses the OID, so for pg_stat_statements the two identical SQL texts above are different query trees, because the OIDs of the tables are different.
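A minimal sketch of that effect, assuming the pg_stat_statements extension is installed and loaded:
create table t (id int);
select count(*) from t;
drop table t;
create table t (id int);  -- same name, but a new OID
select count(*) from t;
-- Two rows with identical text now appear, because the underlying
-- query trees reference different table OIDs:
select queryid, query, calls from pg_stat_statements where query like '%count(*) from t%';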
I have a problem, probably with Postgres permissions. I created a user and granted a role to him:
CREATE USER test PASSWORD 'abc';
GRANT pg_monitor TO test;
Next I want to log in as the test user and run this query:
select * from pg_stat_progress_vacuum;
But unfortunately the user does not have sufficient permissions, and I can't see some info from this select (for example, I can't see the relid).
Sample output:
pid | datid | datname | relid | phase | heap_blks_total | heap_blks_scanned | heap_blks_vacuumed | index_vacuum_count | max_dead_tuples | num_dead_tuples
-------+-----------+------------------+-------+-------+-----------------+-------------------+--------------------+--------------------+-----------------+-----------------
1241 | 213123214 | database1 | | | | | | | |
P.S. I have PostgreSQL 12.2.
You say you can't see some of the columns, but the example you show clearly displays the pid. I assume it's the other 8 columns, starting with "relid", that you can't see.
This works for me. When pg_monitor is granted and a vacuum is running, I see data in all columns; when it is not granted, I see the final 8 columns being NULL.
This looks like some kind of user error, like you are connecting as the wrong user, or to the wrong server, or the GRANT is not actually being executed.
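As a quick sanity check (a sketch using the standard pg_has_role function), run this while connected as the test user:
-- Should return t in the second column if the GRANT took effect for this user
select current_user, pg_has_role(current_user, 'pg_monitor', 'member');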
When trying to implement Django Social, I think I missed a migration somewhere, and now when I get a Twitter redirect to the site I get the following error.
Exception Value: (1054, "Unknown column 'social_auth_usersocialauth.created' in 'field list'")
I can see the table has been created, but two columns aren't there in the database table:
mysql> describe social_auth_usersocialauth;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int | NO | PRI | NULL | auto_increment |
| provider | varchar(32) | NO | MUL | NULL | |
| uid | varchar(255) | NO | | NULL | |
| extra_data | longtext | NO | | NULL | |
| user_id | int | NO | MUL | NULL | |
+------------+--------------+------+-----+---------+----------------+
5 rows in set (0.17 sec)
I want to run a custom migration to add the two fields that are missing, based on an update to the social auth migration:
# Standard migration imports; mytz is assumed to be a timezone helper,
# e.g. django.utils.timezone imported under that alias.
from django.db import migrations, models
from django.utils import timezone as mytz

class Migration(migrations.Migration):
dependencies = [
('dbdisplay', '0001_initial'),
('social_django', '0008_partial_timestamp'),
]
operations = [
migrations.AddField(
model_name='usersocialauth',
name='created',
field=models.DateTimeField(auto_now_add=True, default=mytz.now),
preserve_default=False,
),
migrations.AddField(
model_name='usersocialauth',
name='modified',
field=models.DateTimeField(auto_now=True),
),
]
But the migration doesn't recognize the model I am referring to, and there is an error when I run the migration:
KeyError: ('social_django', 'association')
How do I use AddField in a migration where the table is not in the app's namespace?
The issue is due to an old version of python-social-auth (in this case, social-core and social-app-django). The social_auth_usersocialauth table is missing the following fields:
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
objects = UserSocialAuthManager()
Update your requirements.txt or pip install the latest versions.
social-auth-app-django==5.0.0
social-auth-core==4.1.0
Then
pip install -r requirements.txt
And run the migrate command in Django:
python manage.py migrate
Afterwards, check your DB table and you will find the missing columns.
In Cassandra (I am using DSE),
how do I check how many users are connected to the database? Any way to check node wise?
Is there any auditing info stored which will tell me which all users connected along with info such as IP address and driver used etc?
In Opscenter there is a metric called "Native clients", where is this info stored in the db to query for? Does this include internal communication between the nodes and backups etc?
How do I check how many users are connected to the database? Any way to check node wise?
Is there any auditing info stored which will tell me which all users connected along with info such as IP address and driver used etc?
DSE has a performance service feature which you can enable to make this information available via CQL. To enable this particular capability, configure the following in dse.yaml as described in the docs:
user_level_latency_tracking_options:
enabled: true
With this enabled, you can now query a variety of tables, for example:
cqlsh> select * from dse_perf.user_io;
node_ip | conn_id | last_activity | read_latency | total_reads | total_writes | user_ip | username | write_latency
-----------+-----------------+---------------------------------+--------------+-------------+--------------+-----------+-----------+---------------
127.0.0.1 | 127.0.0.1:55116 | 2019-01-14 14:08:19.399000+0000 | 1000 | 1 | 0 | 127.0.0.1 | anonymous | 0
127.0.0.1 | 127.0.0.1:55252 | 2019-01-14 14:07:39.399000+0000 | 0 | 0 | 1 | 127.0.0.1 | anonymous | 1000
(2 rows)
cqlsh> select * from dse_perf.user_object_io;
node_ip | conn_id | keyspace_name | table_name | last_activity | read_latency | read_quantiles | total_reads | total_writes | user_ip | username | write_latency | write_quantiles
-----------+-----------------+---------------+------------+---------------------------------+--------------+----------------+-------------+--------------+-----------+-----------+---------------+-----------------
127.0.0.1 | 127.0.0.1:55252 | s | t | 2019-01-14 14:07:39.393000+0000 | 0 | null | 0 | 1 | 127.0.0.1 | anonymous | 1000 | null
127.0.0.1 | 127.0.0.1:55116 | s | t | 2019-01-14 14:08:19.393000+0000 | 1000 | null | 1 | 0 | 127.0.0.1 | anonymous | 0 | null
Note that there is a cost to enabling the performance service, and it can be enabled and disabled selectively using dsetool perf userlatencytracking [enable|disable].
In a future release of Apache Cassandra (4.0+) and DSE (likely 7.0+), there will be a nodetool clientstats command (CASSANDRA-14275), and a corresponding system_views.clients table (CASSANDRA-14458) that includes connection info. This will include the driver name, if the driver client provides one (newer ones do).
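Once on a version that has that table, a sketch of such a query (the exact column set may vary by version):
cqlsh> select address, username, driver_name, driver_version from system_views.clients;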
In Opscenter there is a metric called "Native clients", where is this info stored in the db to query for? Does this include internal communication between the nodes and backups etc?
I'm not too up to speed on OpsCenter. From what I know, OpsCenter typically stores its data in the OpsCenter keyspace; you can configure data collection parameters by following this doc.
Considering I have Core Data objects stored like this:
|Name | ActionType | Content | Date |
|-----|------------|---------|-----------|
|Abe | Create | "Hello" | 2014-07-01|
|Cat | Create | "Well" | 2014-07-01|
|Abe | Create | "Hi" | 2014-07-02|
|Bob | Edit | "Yo" | 2014-07-03|
|Cat | Delete | "What" | 2014-07-04|
|Abe | Edit | "Haha" | 2014-07-05|
I would like to get the last action of each user, so the results would be
|Abe | Edit | "Haha" | 2014-07-05|
|Cat | Delete | "What" | 2014-07-04|
|Bob | Edit | "Yo" | 2014-07-03|
Does anyone know how to do that with an NSFetchRequest? So far from what I've gathered, if you want to use "group by", you can only retrieve the values in the group by clause (it will return "Abe, Cat, Bob" without the rest of the data in the Core Data object). Similarly with "returnsDistinctResults", it will not return the whole object.
I have a feeling that Core Data is not equipped for that; any help & hints would be appreciated!
Core Data is an object graph, not a database. Core Data itself has no concept of uniqueness; it's up to you to implement that in your application. This is most typically done using the find-or-create pattern, which helps you prevent duplicate objects from being stored.
That said, you CAN return distinct results from Core Data using the NSDictionaryResultType. This will not prevent duplicates from being stored, but can be used to return distinct results from a fetch. There is an example of this in the programming guide. You can give this request all properties for a given entity by working with the NSEntityDescription of the managed object you are fetching.
For getting the object with the "last" timestamp for each, you actually want to get the object with the maximum value for that key path. That can be done a number of ways - a subquery, key path operators, expressions, etc.
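A sketch in Swift of the dictionary-result approach, assuming an entity named "Action" with "name" and "date" attributes modeled on the table above:
import CoreData

// Fetch, per user name, the maximum (latest) date as dictionaries.
// "Action", "name", and "date" are assumed names based on the table above.
func latestActionDates(in context: NSManagedObjectContext) throws -> [[String: Any]] {
    let request = NSFetchRequest<NSDictionary>(entityName: "Action")
    request.resultType = .dictionaryResultType

    // Build a max(date) expression to fetch alongside the grouping key.
    let maxDate = NSExpressionDescription()
    maxDate.name = "lastDate"
    maxDate.expression = NSExpression(forFunction: "max:",
                                      arguments: [NSExpression(forKeyPath: "date")])
    maxDate.expressionResultType = .dateAttributeType

    request.propertiesToFetch = ["name", maxDate]
    request.propertiesToGroupBy = ["name"]

    return try context.fetch(request) as? [[String: Any]] ?? []
}
Each resulting dictionary carries "name" and "lastDate"; a second fetch with a predicate such as name == %@ AND date == %@ then retrieves the full managed object for each user.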