Does anyone know the correct way to change the domain name in WLS 12c?
After changing the domain name I get this error:
<2015-03-03 15:51:55 CET> <Critical> <JTA> <BEA-110482> <A logging last resource (LLR) failed during initialization. The server cannot boot unless all configured LLRs initialize. Failing reason:
javax.transaction.SystemException: Failed to call registerLoggingResourceTransactions() weblogic.transaction.loggingresource.LoggingResourceException: weblogic.transaction.loggingresource.LoggingResourceException: java.sql.SQLException: JDBC LLR, table verify failed for table 'CS_CMS.WL_LLR_MYSERVER', row 'JDBC LLR Domain//Server' record had unexpected value 'aaa//myserver' expected 'bbb//myserver'* ONLY the original domain and server that creates an LLR table may access it *
Could anyone tell me how I can fix this issue?
Renaming a WebLogic domain is not as simple as renaming a folder; check the following:
Tons of files within your domain folder reference the domain name. Do a grep -r your_domain * and you will see where it is referenced; you can exclude tmp, cache, etc. with --exclude-dir={tmp,logs,cache}
After looking at the above, you can do something like xargs sed -i 's/your_domain/new_domain/g' on all the files containing the old name
Last, in regards to the error you're seeing, WebLogic keeps an LLR table with a row that records the domain and server that created it. Update that row with the new domain name (see the Oracle doc below and the hedged SQL sketch at the end of this answer)
See this Oracle doc with regards to that table
See this example on changing the domain name. Note this example does not include modifying the LLR table.
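For the LLR row itself, a minimal hedged sketch of the update: the table name CS_CMS.WL_LLR_MYSERVER comes from the error above, but the column names XIDSTR and RECORDSTR are assumptions based on typical WL_LLR_* tables, so describe the table first and adjust.
-- Assumption: XIDSTR holds the 'JDBC LLR Domain//Server' key and RECORDSTR the 'domain//server' value
update CS_CMS.WL_LLR_MYSERVER
set RECORDSTR = 'bbb//myserver'
where XIDSTR = 'JDBC LLR Domain//Server';
-- Restart the server afterwards so the LLR verification runs against the corrected row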
I don't see a way to do so; does anyone know how to achieve this in Spanner?
drop index testindex1 if exists
Our scenario:
On day 10, we created an index testindex1, and this change (schema file) may get deployed to some or all production environments
On day 30, we decided we actually don't need testindex1, so we would like to drop it if it is there. We are not sure which production databases it has been created in.
Is there a way to ignore the not-found error in the middle when running a batch of DDL statements?
DROP ... IF EXISTS is not supported in Cloud Spanner.
What is the use case of this? Can you ignore the not-found error from the drop?
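If your deployment tooling can run a query before issuing the DDL, one hedged workaround is to check INFORMATION_SCHEMA and only send the DROP when the index is actually there. A minimal sketch (the empty table_schema filter assumes the default schema; adjust the names to your database):
-- Check whether the index exists before issuing the DROP
select i.index_name
from information_schema.indexes as i
where i.index_name = 'testindex1'
  and i.table_schema = ''
-- Only if a row comes back, submit the DDL separately:
-- drop index testindex1
The check and the DROP are separate requests, so there is still a small race window, but for a one-off cleanup it avoids the not-found failure.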
Clarification: the directory /george/ exists when I am given the task, and I do know how to read the attribute in the CLI showing that it is a directory. The rest of the requested path does not exist, so the only way I can tell whether 'poppet' is a file or a directory is by convention. This question comes from an issue I had with the interpretation of this line, so I am interested in knowing whether there is a convention. This occurred using Ubuntu, but would be relevant in any distro, methinks.
I'm new to Solr and received the following error when adding a document through pysolr:
pysolr.SolrError: Solr responded with an error (HTTP 400): [Reason: ERROR: [doc=bc4aa768-6f35-4888-80e0-1578d9971b3c] Error adding field 'periodical_nlm'='2984692R' msg=For input string: "2984692R"]
I ended up finding out that the first periodical_nlm value added was 404536.0, so I assumed it was a type issue. In Python I then cast every periodical_nlm explicitly to string before adding 2984692R. However, the error persisted.
I Googled a bit and found that I should probably explicitly tell Solr that I want that field to be a string. I've not gotten very "hands on" with the schema yet, so I just had some questions:
(1) There appear to be two schema files: managed-schema in the directory for the core and managed-schema in the conf folder of the core. I'm assuming that the initialized schema which is in use is the one in the conf folder?
(2) Which do I update in order for things to proceed smoothly? I attempted adding the following to the schema file in the core directory but the error persisted:
<field name="periodical_nlm" type="string" indexed="true" stored="true" required="false" multiValued="false" />
Do I need to rerun some initialization process or add something to the conf file separately?
Thank you so much and please let me know if you need more info. I'm running on a Windows 10 Home x64 platform (not sure if that's important if there are any command-line things I need to run...).
As long as you reload the core after changing the managed-schema file under conf, you should be fine. Be aware that you should do this before indexing content - so you might need to clean out the index by deleting everything, then changing the schema and re-indexing your content. Changing the schema does not change content that has already been indexed.
Otherwise your assumption is correct. Schemaless mode, where the field type is guessed from the format of the first value submitted (not from its type, since values are just strings when submitted and Solr applies a hierarchy of pattern matching to guess), is useful for prototyping; when you move to production you should always define the schema explicitly to avoid issues like the one you've seen here.
I'm trying to test the Azure Data Warehouse. I successfully created and connected to the database, but I've run into a snag as I attempt to load the tables. I'm trying to execute the following instructions:
To install AdventureWorksSQLDW2012:
-----------------------------------
4. Extract files from AdventureWorksSQLDW2012.zip file into a directory.
5. Edit aw_create.bat setting the following variables:
a. server=<servername> from step 1. e.g. mylogicalserver.database.windows.net
b. user=<username> from step 1 or another user with proper permissions
c. password=<passwordname> for user in step 5b
d. database=<database> created in step 1
e. schema=<schema> this schema will be created if it does not yet exist
6. Run aw_create.bat from a cmd prompt, running from the directory where the files were unzipped to.
This script will...
a. Drop any Adventure Works tables or views that already exist in the schema
b. Create the Adventure Works tables and views in the schema specified
c. Load each table using bcp
d. Validate the row counts for each table
e. Collect statistics on every column for each table
I completed the prerequisites of installing bcp and sqlcmd and used the -? command to confirm the installations.
Unfortunately, when I try to complete step 6 above I get the following error:
REM AdventureWorksSQLDW2012 sample database version 3.0 for DW Service Tue 06/27/2017 20:31:01.99 Bcp must be installed.
Has anyone else come across this error, or can anyone suggest a potential solution?
UPDATE: I've also copied the path where BCP is located to my path environment variables. Still no luck.
The aw_create.bat contains a line where you need to provide the path of the bcp program. Once that was provided and the script saved, it worked like a charm.
I am getting an import error in a specific environment with a managed CRM 2011 solution. The solution has been imported before into many other environments, but the one in particular where it is failing is throwing the following error:
Dependency Calculation
role With Id = 9e2d2d9b-645f-409f-b31d-3a9c39fcc340 Does Not Exist
I am a bit confused about this. I searched within the solution XML and was not able to find any reference to this particular GUID of 9e2d2d9b-645f-409f-b31d-3a9c39fcc340. I cannot really find it in SQL either, just wandering through the different tables, but perhaps I do not know exactly where to look there.
I have tried importing the solution multiple times. As a desperation effort, I tried renaming all of the security roles in the destination environment prior to importing, but this did not help.
Where is this reference to a security role actually stored? Is this something that is supposed to be within my solution--which my existing CRM deployment is expecting me to import?
How do I fix the problem so that I am able to import this solution?
This is the code we used to fix the issue. We had to run two different scripts. We had to run Script A a total of four times: run it once, attempt the import, and then consult the log to find the next role causing the problem, if you receive another error for another role.
To run script A, you must use a valid RoleTemplateId from your database. We just picked a random one. It should not matter precisely which one you use, because you will erase that data element with script B.
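For illustration, one hedged way to pick such a value from the same tables the scripts below use (any existing, non-NULL RoleTemplateId will do, since Script B clears it again):
-- Grab an arbitrary existing RoleTemplateId to plug into Script A
select top 1 RoleTemplateId
from RoleBase
where RoleTemplateId is not null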
After all of the roles were fixed, we got a different error (complaining that the RoleTemplateId was already related to a role) and had to run Script B. That removes the RoleTemplateId from multiple different roles and sets it to NULL.
Script A:
-- Register a RoleId, then create a placeholder role ('ROLE IMPORT FIX') with it
insert into RoleBaseIds(RoleId)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234')
insert into RoleBase(RoleId
,RoleTemplateId
,OrganizationId
,Name
,BusinessUnitId
,CreatedOn
,ModifiedOn
,CreatedBy
)
values ('WXYZ74FA-7EA3-452B-ACDD-A491E6821234' -- same RoleId as registered above
,'ABCD89FF-7C35-4D69-9900-999C3F605678' -- any valid RoleTemplateId from your database; Script B clears it afterwards
,(select organizationid from Organization)
,'ROLE IMPORT FIX'
,(select BusinessUnitID from BusinessUnit where ParentBusinessUnitId is null)
,GETDATE()
,GETDATE()
,null
)
Script B:
update RoleBase
set RoleTemplateId = NULL
where RoleTemplateID='ABCD89FF-7C35-4D69-9900-999C3F605678'
Perfect solution, worked for me! My only comment concerns an error in Script B: it shouldn't clear the template IDs of all roles for the given template, only the template ID of the newly created "fix" role, as follows:
update RoleBase
set RoleTemplateId = NULL
where RoleID='WXYZ74FA-7EA3-452B-ACDD-A491E6821234'
I would've gladly put this in a comment to the answer, but not enough rep as of now.