I'm trying to set up catalog replication from the staged to the online catalog. I've created the sync job, and it finishes successfully, but nothing is created in the online catalog.
Any suggestion?
Thanks Marco
In general, the SetupSyncJobService is used to create sync jobs. It may be better to use that class to create your sync job.
Did you set the root types? Root types are all item types that should be copied from the staged to the online catalog. So if you want to copy products from staged to online, there should be an entry "Product" in the root types list. The SetupSyncJobService creates all root types for you, so you don't need to bother. Perhaps you can compare the setup of your sync job to another sync job created by the SetupSyncJobService and match the configuration.
I've been considering turning on Databricks Unity Catalog in our primary (only) workspace, but I'm concerned about how this might impact our existing dbt loads with the new three-part object references.
I see from the dbt-databricks release notes that you need >= 1.1.1 to get Unity support. The accompanying snippet only shows setting the catalog property in the profile. I was planning on having some of the sources in catalogs separate from the dbt-generated objects.
I might even choose to have the dbt-generated objects in separate catalogs if this were available.
As turning on Unity Catalog is a one way road in a workspace, I don't wish to wing it and see what happens.
Has anyone used dbt with Unity Catalog with numerous catalogs in the project?
If so, are there any gotchas, and how do you specify the catalog for sources and specific models?
Regards,
Ashley
Specifying a two-part object name in the schema attribute indeed causes problems, at least in incremental models; specify the catalog attribute instead:
sql-serverless:
  outputs:
    dev:
      host: ***.cloud.databricks.com
      http_path: /sql/1.0/endpoints/***
      catalog: hive_metastore
      schema: tube_silver_prod
      threads: 4
      token: ***
      type: databricks
  target: dev
Thanks Anton, I ended up resolving this. I created a temporary workspace to test it before applying it to the main workspace. The catalog attribute can be applied almost anywhere you can specify the schema attribute, not just in profiles.yml. I now have a dbt project that targets multiple catalogs; these are set in dbt_project.yml at the appropriate model level.
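For reference, a sketch of what that configuration can look like. All catalog, schema, and project names below are made up, and the placement of the catalog attribute follows what worked in this thread rather than any official documentation:

```yaml
# dbt_project.yml -- set the catalog per model folder (hypothetical names)
models:
  my_project:
    staging:
      +catalog: dev_bronze
    marts:
      +catalog: dev_gold

# models/sources.yml -- a source can point at a different catalog
sources:
  - name: lakehouse_raw
    catalog: raw_catalog
    schema: events
    tables:
      - name: clicks
```

Note that dbt's generic name for the first part of a three-part reference is `database`; on Databricks that maps to the catalog.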
We are migrating from self-hosted GitLab to a gitlab.com subscription. We have 28 parent groups, and under these groups there are multiple subgroups and projects.
I know I can export one group and it will export all the subgroups under it and then I can import it.
but the documentation says to export/import a single project at a time. I have 3000+ projects, and doing this 3000+ times is not possible.
Can you please suggest how I can export/import all the projects under a group, regardless of whether a project sits directly in the group or in some subgroup of the hierarchy?
Or is there any other way?
You will need to write scripts that interact with the GitLab API to perform the migration of groups/orgs, projects, branches, and merge requests. See the following post on GitLab's forum for guidance.
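As a hedged sketch of such a script (the instance URL, token, and group ID below are placeholders), the GitLab REST API can list every project under a group, including its subgroups, and schedule an export for each one:

```python
import requests

GITLAB_URL = "https://gitlab.example.com"  # placeholder: your self-hosted instance
TOKEN = "glpat-..."                        # placeholder: a personal access token
GROUP_ID = 42                              # placeholder: the parent group's numeric ID
HEADERS = {"PRIVATE-TOKEN": TOKEN}

def list_group_projects(group_id):
    """Yield every project under a group, recursing into subgroups."""
    page = 1
    while True:
        resp = requests.get(
            f"{GITLAB_URL}/api/v4/groups/{group_id}/projects",
            headers=HEADERS,
            params={"include_subgroups": True, "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means we've seen everything
            return
        yield from batch
        page += 1

def trigger_export(project_id):
    """Schedule an export archive for one project."""
    resp = requests.post(
        f"{GITLAB_URL}/api/v4/projects/{project_id}/export",
        headers=HEADERS,
    )
    resp.raise_for_status()

# Usage (would hit the network):
#   for project in list_group_projects(GROUP_ID):
#       trigger_export(project["id"])
```

Once an export's status is "finished", the archive can be fetched from `/api/v4/projects/:id/export/download` and imported into the target instance.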
While creating a workspace in Perforce, I got the error below:
You should define workspace view in more detail. (minimum 2 depth)
This is not a standard Perforce error, and is therefore most likely coming from a custom trigger set up by your Perforce admin. In order to resolve a custom trigger failure you will need to consult with your Perforce administrator (i.e. the person who defined the trigger) to determine what conditions are required to satisfy the trigger.
If you would like to learn more about how to define triggers, see https://www.perforce.com/manuals/p4sag/Content/P4SAG/chapter.scripting.html
(this is not useful to you as an end user encountering a trigger failure, but may provide additional context on how triggers work from your admin's perspective).
The workaround/fix I found is to make sure the folder selected as the "Workspace root" has its last two levels as empty folders.
For example, if you choose "C:\Users\stackuser\workspace\project\codes", make sure "project" and "codes" are two empty folders.
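For context, a client spec whose view is mapped more than one level deep might look like the following; the depot path and client name here are hypothetical:

```
Client: my_client
Root:   C:\Users\stackuser\workspace\project\codes
View:
    //depot/project/codes/... //my_client/...
```

A trigger enforcing "minimum 2 depth" is presumably checking that the depot side of the view (`//depot/project/codes/...` above) reaches at least two directory levels below the depot root, rather than mapping something broad like `//depot/...`.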
I am using SVN (Subversion), which is file versioning software that stores changes to code artifacts in the form of revisions.
I want to use a similar kind of software for DB2 objects such as tables and indexes, so that I can track changes to them using revisions.
Has anyone done this before?
You can use the IBM Data Studio client, which creates a deployment group for database changes.
Once you create a deployment group, you can save it and apply it to several databases.
Take a look at
http://pic.dhe.ibm.com/infocenter/dstudio/v3r2/topic/com.ibm.datatools.deployment.manager.ui.doc/topics/c_deploy_mgr.html
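If you mainly want revision history rather than deployments, another option is to script DDL extraction and commit the output to SVN on a schedule. This is a hedged sketch: it assumes the `db2look` utility and the `svn` client are on your PATH, and the database name, file path, and `snapshot_ddl` helper are all made up for illustration.

```python
import subprocess

def ddl_export_command(database, outfile):
    """Build the db2look invocation that extracts DDL for the database's objects."""
    # -d: database name, -e: generate DDL statements, -o: output file
    return ["db2look", "-d", database, "-e", "-o", outfile]

def svn_commit_command(path, message):
    """Build the svn commit invocation for the exported DDL file."""
    return ["svn", "commit", path, "-m", message]

def snapshot_ddl(database, outfile, message):
    """Extract the current DDL and record it as a new SVN revision."""
    subprocess.run(ddl_export_command(database, outfile), check=True)
    subprocess.run(svn_commit_command(outfile, message), check=True)

# Usage (requires db2look and svn on PATH, and outfile already under version control):
#   snapshot_ddl("MYDB", "mydb_ddl.sql", "Nightly DDL snapshot")
```

Diffing successive revisions of the exported file then shows exactly how tables and indexes changed over time.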
I am new to AccuRev. I am working in a workspace and see a problem that I need a team member's help to look at. What is the easiest way to let another team member access the files in my workspace through AccuRev?
Using Subversion, anyone could check out my branch, see my changes, build the code, and reproduce the issue. But with AccuRev I am not sure how to do the same.
Reading the AccuRev documentation, it looks like a workspace acts as a stream but cannot be shared between developers.
It's been a few months since I've used AccuRev, but if I recall correctly, your team members can browse your workspace. If you're browsing your available streams using the GUI tool, there should be a setting at the bottom to show workspaces from all users. This will allow them to at least browse your workspace. If you want other developers to be able to change these files, you'll have to promote them up into a stream. If necessary, you can always create a new stream and re-parent your workspace.
If you have kept any files in your workspace, another user can open the "Version Browser" for that element from their workspace, see your kept version, and look at your changes.
Or, you could create a new stream, reparent your workspace to that stream, and promote your changes into it.