Repository URLs broken in Gitea after restoring a backup

I have restored a Gitea backup. Everything seems fine except that clicking repository links in the web UI opens either a URL containing only the hostname or a wrong URL (the repository part of the path is missing).
If I create a new repository, I can access it as usual by clicking its link. If I manually type a repository's URL (e.g. gitea.my-domain.de/username/reponame.git), I can access it.
My first attempt was setting a ROOT_URL in app.ini, but this did not work. Any ideas?
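For reference, ROOT_URL lives in the [server] section of app.ini; what I set looked roughly like this (the value is just a sketch using the hostname from the example above):
; app.ini
[server]
ROOT_URL = https://gitea.my-domain.de/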

This just happened to me, and it looks to be because the owner_name column of the repository table in the Gitea database gets dumped as NULL. Interestingly, the owner_id column keeps the correct value.
I am using MySQL (MariaDB) and did the following:
USE gitea;
SELECT id, name, owner_id, owner_name FROM repository;
Inspect this output just to make sure it looks sane.
UPDATE repository INNER JOIN `user` ON `user`.id = repository.owner_id SET owner_name = `user`.name;
This looks up and restores owner_name based on owner_id from the user table.
SELECT id, name, owner_id, owner_name FROM repository;
Just to make sure it worked. Compare the first output to the second.
I checked the issues on the Gitea repository on GitHub and didn't see any matching these symptoms. I will open an issue there later and link this article.

I had a similar issue and managed to solve it with the help of the previous answer by a7hybnj2.
My instance is running with an SQLite database, for which the update command needs to look like this:
UPDATE repository
SET owner_name = (SELECT name
FROM `user`
WHERE (repository.owner_id = `user`.id))
WHERE EXISTS (SELECT *
FROM `user`
WHERE (repository.owner_id = `user`.id));
The modification is necessary because SQLite does not support INNER JOIN within an UPDATE statement.
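As a side note, SQLite 3.33.0 and newer also support an UPDATE ... FROM form, which is closer to the MySQL version above (a sketch using the same table and column names):
-- Requires SQLite 3.33.0 or newer (UPDATE ... FROM support)
UPDATE repository
SET owner_name = `user`.name
FROM `user`
WHERE repository.owner_id = `user`.id;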

Related

SQLite database returning two different values for the same column

I'm currently making a NodeJS application and I'm using SQLite as the backend database. I've run into a problem when trying to soft-delete entries in a table. When I use the application to change the 'deleted' attribute, the changes appear in the SQLite CLI, but the application still displays each record as not deleted. These are the steps I use to test the application and database:
In the SQLite CLI, call the createDB.sql script to delete all tables and set them up again.
Call the populateDB.sql script to input test data into each of the tables.
Check in the SQLite CLI that the records are correct (SELECT id, deleted FROM table1;).
Check in the application console that the records are correct.
In the application, change the deleted attribute for a single entry.
Output the entry to the console: the console shows the entry as not deleted.
In the application, change the deleted attribute for all entries.
Output the entries to the console: the console shows the entries as not deleted.
Check in the SQLite CLI that the records are correct: the output shows the deleted attribute has changed for all records.
Output the deleted attribute to the application console: it still shows all entries as not deleted.
These are the steps I have taken to try to resolve the issue:
The data type for the deleted field is BOOLEAN. SQLite doesn't have a dedicated boolean type but stores the value as 1 or 0 (see the short demo after this list). I have tried using 1 and 0, 'true' and 'false', and even 'yes' and 'no' with no change in the behaviour.
I have tried this using both relative and absolute file paths for the database.
I added time delays to the application in case there is a delay updating the database.
I looked into file locking and learned that SQLite prevents two processes from writing to the database file concurrently. Makes sense. I killed my CLI process and tried to update the deleted attribute from the application only, making sure it was the only thing connected to the database, but got the same result.
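A minimal demo of the boolean point above (table and column names are made up):
-- SQLite has no native BOOLEAN type; a column declared BOOLEAN gets
-- NUMERIC affinity, and 1/0 are stored as plain integers.
CREATE TABLE demo (deleted BOOLEAN);
INSERT INTO demo (deleted) VALUES (1), (0);
SELECT deleted, typeof(deleted) FROM demo;  -- both rows report 'integer'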
After all this testing, I believe the application is writing to the actual database file but is reading from a cache. Is this an accurate conclusion? If so, how do I tell the application to refresh its cache after executing an update query?
The application was writing to the database file and reading from the same database, not a cache. However, my query was set up as SELECT * FROM table1 LEFT OUTER JOIN table2 LEFT OUTER JOIN table3 LEFT OUTER JOIN table4. table1, table2 and table3 all have a column called "deleted". I was updating table1.deleted but reading table2.deleted. I've amended the query to select only the columns I need from now on!
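A sketch of the fix: qualify exactly the columns you need, so the several deleted columns can no longer shadow one another (the join conditions are assumptions, since the original query didn't show them):
-- SELECT * returned a 'deleted' column from table1, table2 and table3,
-- and the result object ended up exposing the wrong one.
SELECT t1.id,
       t1.deleted                                   -- the column actually being updated
FROM table1 t1
LEFT OUTER JOIN table2 t2 ON t2.table1_id = t1.id   -- join keys assumed
LEFT OUTER JOIN table3 t3 ON t3.table1_id = t1.id
LEFT OUTER JOIN table4 t4 ON t4.table1_id = t1.id;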

Content Cache dependency in Kentico V9

I want to update cached content of one custom table when another custom table item is updated.
Let's say I have two Custom Tables: Product and Order.
There are List and Edit pages for both Product and Order.
In the DB, there is a trigger on Product that updates some data on Order when a Product is changed.
My scenario is: when I update, let's say, Product 1 (one of the items of type Product), I want the Orders cache (all orders) to refresh and reflect the changes made in the DB for Order. This is not happening right now.
The global setting is 10 minutes for content caching, but somehow it takes 20 minutes to reflect changes. Not sure why.
Also, the Orders CustomTableRepeater's System Settings -> Cache minutes is set to 0, which means it should not be caching content at all, but it still does, so I am at a loss here.
The answer to this scenario should be setting a cache dependency dummy key as per the Kentico documentation.
My questions are:
Do I set a dependency key for all orders on the Product edit page's web part partial output cache dependency property,
e.g. orders|all?
Will this refresh all order records cached for the custom table data source when any product is modified?
Or do I set a dependency key for all products on the orders repeater's System settings -> Content Cache Dependency property,
e.g. products|all?
Please note the Cache minutes property is set to 0, so ideally this content should not be cached at all.
Or do I add the above key to the Order edit page's web part partial output dependency?
Also, how do I get the proper dummy key for a custom table? Is it
products|all
OR
nodes|corportateside|products|all
OR
customtableitem.products|all
Or do I need to add the pages' dummy keys that I can see under Debug -> Cache settings?
I have tried setting up all these things but nothing seems to work.
Any help is greatly appreciated.
Okay, so it turned out not to be a cache issue after all.
I was able to resolve it. Putting the answer here for future reference. I will first list what I tried:
Installed the hotfix.
Added a partial cache dependency key.
Added a cache dependency key for content caching. Nothing worked.
I got an idea from reading an answer to this question: https://devnet.kentico.com/questions/kentico-8-2-database-caching
When I was updating custom table A's data, a DB trigger on A would update data in table B, which I needed to refresh in the site's cache.
When I tried 'Clear cache' in the Debug application in the admin UI, it still did not update the data on the site. My custom table data in the admin UI was not getting updated either.
Reading one answer from the question above, I realized I needed to refresh the hashtables for the data to be refreshed in the admin UI and subsequently on the site.
So I added code to CustomTableForm.aspx.cs in the OnAfterSave event handler. There I check whether the current custom table is my table A and, if so, refresh the hashtables of B.
This worked.

How do I completely delete or change a custom field added to a base Acumatica page?

I've struggled with this issue my entire Acumatica career. The customer asks for a custom field on a base Acumatica page and gives me the data type of the field. I create said field. The customer then changes the spec, which in turn changes the type of the field. I change the field type in Data Access, but it doesn't work. I change the field type in every place I know to find it (Data Access Class, .XML, CustObjects), and the field still keeps the same properties! I delete the field completely, run a clean publish to make sure it gets wiped, and recreate the field with the same name. Nope! Somehow the old type is still assigned to that field. I have even completely deleted the project, run a clean publish, recreated the project with the same name, run another clean publish while it was empty, recreated the DAC object with the same name, run ANOTHER clean publish, and it STILL has the old control type, despite it not being set in the project XML or the CustObjects table.
How do I change the field type of a Usr field created via the Data Access section of the Customization Project?
My current solution is to just append a 2 to the name, so it becomes UsrFieldName2, which I really don't want to keep doing because it will mess up reports and anything else directly linked to that data field.

How to inactivate users in Maximo who don't use it anymore?

I am very new to Maximo.
I wanted to know how to inactivate users who are not using Maximo anymore. I tried googling this but was not able to find enough material on Maximo.
I have to write a cron task to do that.
I saw this: http://www.ibm.com/support/knowledgecenter/SSZRHJ/com.ibm.mbs.doc/autoscript/t_cron_task_scripts.html
Can anyone give me a few pointers on how to write it, maybe a sample cron task?
What you're looking for is an Escalation. Escalations are instances of a special cron task designed to use a query (a target object and a WHERE clause) to find records and then apply actions to, and/or send emails from, each record found.
You'll need to define the WHERE clause against MAXUSER to locate the records you want to deactivate, and find or define an Action that changes the status of the records found. You can then hook the query and the action together via an Escalation.
The query below is for an Oracle database. It looks at the logintracking table and gives you a list of users whose last interaction with Maximo was over 90 days ago:
select * from
(select max(attemptdate) lastLogout, loginid
from logintracking
where loginid <> 'MAXADMIN'
group by loginid
order by max(attemptdate) desc
)
where lastlogout < sysdate - 90
If this is a one-time change, you can manually inactivate the users via Security > Users. If you want to run this weekly, you'd have to change the query a bit so it updates the status field in the maxuser table where the user status is still active but the user is part of your inactivation list (a sketch follows below).
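A hedged sketch of that update, reusing the logintracking query above (Oracle syntax; the maxuser column names are assumptions, and a direct SQL update bypasses Maximo's business logic, which is why the Action/Escalation route is preferable):
-- Sketch only: inactivate still-active users with no Maximo activity
-- in the last 90 days. Column names on maxuser are assumptions.
update maxuser
set status = 'INACTIVE'
where status = 'ACTIVE'
  and loginid in
    (select loginid
     from logintracking
     where loginid <> 'MAXADMIN'
     group by loginid
     having max(attemptdate) < sysdate - 90)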
Best to experiment in a test environment to make sure your escalation is working properly.
You commented to +Sun that "Maximo is synchronized with LDAP". In that case, try this:
Update the UserMapping on the existing instance of the LDAPSYNC Cron Task to look for a flag that indicates that the user is active in LDAP. (Exactly what to look for will depend on your organization. Your LDAP administrators should be able to help you.) Then, make a second instance of the LDAPSYNC Cron Task that looks identical to the first, except that (1) the GroupMapping doesn't find any groups (use a condition like (objectName=DOES_NOT_EXIST) that won't find anything), and (2) the UserMapping looks for the flag indicating the user is not active and has {INACTIVE} mapped to the STATUS attribute of MAXUSER.

Broadleaf DemoSite Delete Products

I installed the DemoSite version of Broadleaf. When I try to delete a product from the /admin section, I get the following error message:
org.hibernate.exception.ConstraintViolationException: Cannot delete or update a parent row: a foreign key constraint fails (broadleaf.blc_product, CONSTRAINT FK5B95B7C96D386535 FOREIGN KEY (DEFAULT_SKU_ID) REFERENCES blc_sku (SKU_ID))
I understand that there is a foreign key constraint on the SKU table. Shouldn't it auto-delete the related SKUs whenever I delete a product?
Even if not, how can I delete the SKUs first? I tried deleting the product options first, but that didn't help either.
Quite an old post, and I don't know how relevant it is to you now, but it might help others. Apart from this workaround, Broadleaf also supports soft delete instead of hard delete.
You can archive a product and it won't show anywhere in the admin or on the site. I found it useful since you might need a product again later, and you can just bring it back from the archived state if you soft-deleted it.
Broadleaf has an ARCHIVED column in the BLC_PRODUCT table. You just need to set the flag to 'Y' to archive a product, and later clear it to recover the product.
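A minimal sketch of that flag flip in SQL (the table and column come from the answer above; the product ID is hypothetical, and the admin UI is the safer place to do this):
-- Archive (soft delete) a product; it disappears from admin and site.
update blc_product set archived = 'Y' where product_id = 100;  -- ID is hypothetical
-- Clear the flag later to bring the product back.
update blc_product set archived = 'N' where product_id = 100;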
