I am running Liferay 6.2 on WebLogic 12c server.
Out of nowhere it just stopped working.
This is the last thing I see before it throws a flurry of exceptions:
<Jan 10, 2014 2:53:28 PM EST> <Notice> <LoggingService> <BEA-320400> <The log file C:\Oracle_2\Middleware\user_projects\domains\liferay\servers\AdminServer\logs\AdminServer.log will be rotated. Reopen the log file if tailing has stopped. This can happen on some platforms, such as Windows.>
<Jan 10, 2014 2:53:28 PM EST> <Notice> <LoggingService> <BEA-320401> <The log file has been rotated to C:\Oracle_2\Middleware\user_projects\domains\liferay\servers\AdminServer\logs\AdminServer.log00369. Log messages will continue to be logged in C:\Oracle_2\Middleware\user_projects\domains\liferay\servers\AdminServer\logs\AdminServer.log.>
The errors are shown here: http://www.pastebin.ca/2532946
Does anyone have any ideas on this?
As the excerpt of your log file below shows, Liferay either cannot get a handle to the HSQL database, or the HSQL db was corrupted when you updated it.
13:11:16,769 WARN [C3P0PooledConnectionPoolManager[identityToken->uArzPQ2m]-HelperThread-#4][BasicResourcePool:1851] com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask#933b16 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: error in script file line: 15 unexpected token: AVG
So you need to answer the questions below:
Did you use any client tool to make changes to your HSQL db?
If yes, did you close the connection to the HSQL database before starting Liferay?
If not, Liferay won't be able to acquire a lock on your db and will fail to start.
If you did not use a client tool, did you make the DB changes directly in the HSQL db file?
That is NOT recommended. Roll back your changes and use an HSQL client to make your db changes (see the sketch below).
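If you do need to inspect or fix the data, a safer route is to open it with the HSQL client shipped in the hsqldb jar (with Liferay stopped), make the changes there, and then close the client so the file lock is released before starting Liferay again. A rough sketch, assuming a default Liferay bundle layout (adjust the jar path and the data path to your install):

java -cp hsqldb.jar org.hsqldb.util.DatabaseManagerSwing --driver org.hsqldb.jdbcDriver --url jdbc:hsqldb:file:/path/to/liferay-home/data/hsql/lportal --user sa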
HTH!
P.S. Is this issue a duplicate of https://stackoverflow.com/questions/21052236/weblogic-wont-start? If so, please delete that one.
As per the document (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/managing-repositories_composing-a-customized-rhel-system-image), I tried to override the system repository with a custom base URL, but blueprint depsolve shows the error below:
# composer-cli blueprints depsolve Test1-blueprint
2022-06-09 08:06:58,841: Test1-blueprint: This system does not have any valid subscriptions. Subscribe it before specifying rhsm: true in sources.
And after the next service restart, osbuild-composer does not start:
ERROR: Info Error: Get "http://localhost/api/v1/projects/source/info/appstream": dial unix /run/weldr/api.socket: connect: connection refused
Am I missing something here?
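For reference, the override that document describes is a JSON file placed under /etc/osbuild-composer/repositories/ (e.g. rhel-8.json). A rough sketch of the format with placeholder values; note that on a system without a Red Hat subscription the entries must not set "rhsm": true, which is what the depsolve error above is complaining about:

{
    "x86_64": [
        {
            "name": "appstream",
            "baseurl": "http://repo.example.com/rhel8/appstream/",
            "check_gpg": false,
            "rhsm": false
        }
    ]
}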
Having all manner of issues with this myself. A trawl of my /var/log/messages file suggests that, for me at least, osbuild-composer is failing to start due to the non-existence of /etc/osbuild-composer/osbuild-composer.toml. The actual error is "permission denied", but the file doesn't exist.
This is on RHEL 8.5; I just updated to 8.6 this morning and have the same problem.
Edit: I've removed everything and reverted to using the lorax backend, as per chapter 2.2 in the linked doc (the same one I was following). My 'composer-cli compose types' command now at least works. Fingers crossed.
We are planning to migrate from HMC to Backoffice, and I want all the data in HMC to be transferred to Backoffice. When I ran a SOLR full indexing, I got the issue below:
ERROR [Thread-6784] (00011GS2) [BackofficeIndexerStrategy] Executing indexer worker as an admin user failed:
ERROR [Thread-645] (00011HJD) [BackofficeIndexerStrategy] Executing indexer worker as an admin user failed:
INFO | jvm 1 | main | 2021/01/13 09:54:59.310 | de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerException: de.hybris.platform.solrfacetsearch.solr.exceptions.SolrServiceException: Could not check for a remote solr core [master_backoffice_backoffice_product_flip] due to Server refused connection at: http://Dev:8983/solr
I get this error every time, and I have to restart my Solr server.
Can someone please help me with this?
Thanks
This might be an issue with SSL checks. You can try disabling SSL by adding
solrserver.instances.default.ssl.enabled=false
to your local.properties file. This should only be used in your local development environment.
Alternatively, you can try using HTTPS for your default Solr server URL, e.g. https://localhost:8983/solr.
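A minimal local.properties sketch for the first option; the ssl.enabled flag is the property above, while the hostname/port property names are assumptions that may differ by SAP Commerce version, so verify them against your solrserver extension's properties:

# disable SSL for the default standalone Solr instance (local development only)
solrserver.instances.default.ssl.enabled=false
# assumed companion properties; check your version
solrserver.instances.default.hostname=localhost
solrserver.instances.default.port=8983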
I am using SAP Hybris 2205; products display in the storefront, but the Solr server is not working. I also tried this (Backoffice -> facet conf. -> site index -> enter -> finish) and still get an error.
In Liferay, I see the following error:
Problem: Absolutely nothing appears in the Liferay log.
Question: How can I investigate further?
For some obscure reason, Liferay has chosen to make some exceptions (even very serious ones) show up with the DEBUG log level.
So, the solution is to set the log level to DEBUG (the com.liferay package should be enough for most situations):
Don't forget to press "Save". Once set, try again (no restart needed), and you will probably see the exception appear, so that you can know exactly what is going wrong.
Example:
09:34:29,903 DEBUG [http-nio-8080-exec-5][LiferayPortlet:587] com.liferay.asset.kernel.exception.NoSuchEntryException: No AssetEntry exists with the key {classNameId=20015, classPK=36354}
com.liferay.asset.kernel.exception.NoSuchEntryException: No AssetEntry exists with the key {classNameId=20015, classPK=36354}
at com.liferay.portlet.asset.service.persistence.impl.AssetEntryPersistenceImpl.findByC_C(AssetEntryPersistenceImpl.java:3551)
This log level modification will be reset next time you restart Liferay.
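If you want the DEBUG level to persist across restarts, you can also set it in a Log4j extension file instead of the Control Panel. A sketch, assuming a Log4j 1.x based Liferay (for 6.x up to 7.3 the file is typically tomcat/webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml; 7.4 switched to Log4j 2 and uses a different mechanism, so check the docs for your version):

<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <!-- raise com.liferay to DEBUG so swallowed exceptions show up in the log -->
    <category name="com.liferay">
        <priority value="DEBUG" />
    </category>
</log4j:configuration>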
To get more information on an unexpected "Your request failed to complete" error, you can activate the DEBUG traces for the category:
com.liferay.portal.kernel.servlet.SessionErrors
in the Control Panel => System => Server Administration => Log Levels section.
After activating these DEBUG traces, every time an error is stored in the SessionErrors object, it will be written to the log file.
These traces were added by issue LPS-135569, which was fixed as of DXP 7.4, so if you are using an older version you will have to update or patch your installation.
I upgraded OpsCenter from 5.1.3 to 5.2.0 (and then to 5.2.1). I had a scheduled backup to the local server and an S3 location configured before the upgrade, which worked fine with OpsCenter 5.1.3. I made no changes to the scheduled backup during or after the upgrade.
The day after the upgrade, the S3 backup failed. In opscenterd.log, I see these errors:
2015-09-28 17:00:00+0000 [local] INFO: Instructing agents to start backups at Mon, 28 Sep 2015 17:00:00 +0000
2015-09-28 17:00:00+0000 [local] INFO: Scheduled job 458459d6-d038-41b4-9094-7d450e4bac6f finished
2015-09-28 17:00:00+0000 [local] INFO: Snapshots started on all nodes
2015-09-28 17:00:08+0000 [] WARN: Marking request d960ad7b-2ccd-40a4-be7e-8351ac038c53 as failed: {'sstables': {u'solr_admin': {u'solr_resources': {'total_size': 155313, 'total_files': 12, 'done_files': 0, 'errors': [u'{:type :opsagent.backups.destinations/destination-not-found, :message "Destination missing: 62f5a26abce7463bad9deb7380979c4a"}', u'{:type :opsagent.backups.destinations/destination-not-found, :message "Destination missing: 62f5a26abce7463bad9deb7380979c4a"}', u'{:type :opsagent.backups.destinations/destination-not-found, :message "Destination missing: 62f5a26abce7463bad9deb7380979c4a"}', shortened for brevity.
The S3 location no longer appears in OpsCenter when I edit the scheduled backup job. When I try to re-add the S3 location, using the same bucket and credentials as before, I get the following error:
Location validation error: Call to /local/backups/destination_validate timed out.
Also, I don't know if this is related, but for completeness, I see some of these errors in the opscenterd.log as well:
WARN: No http agent exists for definition file update. This is likely due to SSL import failure.
I get this behavior with either DataStax Enterprise 4.5.1 or 4.7.3.
I have been having the exact same problem since updating to OpsCenter 5.2.x and was just able to get it working properly.
I removed all the settings suggested in the previous answer and then created new buckets in us-west-1, us-west-2 and us-standard. After this I was able to add all of those as destinations quickly and easily.
It appears to me that the problem is that OpsCenter may be trying to list the objects in the bucket you configure initially; in my case the two existing buckets we were using had 11 TB and 19 GB of data in them, respectively.
This could explain why increasing the timeout for some worked and not others.
Hope this helps.
Try adding the remote_backup_region property to the cluster configuration file under the [agents] heading in "cluster-name".conf. Valid values are: us-standard, us-west-1, us-west-2, eu-west-1, ap-northeast-1, ap-southeast-1
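A minimal sketch of that section (on package installs the file is typically /etc/opscenter/clusters/<cluster-name>.conf; the region below is just one of the valid values listed above):

[agents]
remote_backup_region = us-west-2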
Does that help?
The problem was resolved by a combination of 2 things.
Delete the entire contents of the existing S3 bucket (or create a new bucket, as previously suggested by @kaveh-nowroozi).
Edit /etc/datastax-agent/datastax-agent-env.sh and increase the heap size to 512M as suggested by a DataStax engineer. The default was set at 128M and I kept doubling it until backups became successful.
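For example, a sketch of the change in /etc/datastax-agent/datastax-agent-env.sh (the exact variable name may differ between agent versions; the stock file caps the heap at 128M):

# raise the agent's max heap so backup metadata (e.g. S3 object listings) doesn't exhaust it
JVM_OPTS="$JVM_OPTS -Xmx512M"

Restart the datastax-agent service after editing the file.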
I'm running into a problem with getting SSL to work in the Development Fabric. I'm running a clean install of Windows 8 Pro with Visual Studio 2012 Ultimate and the October 2012 Azure SDK for .NET. IIS8 is not installed, only IIS Express, which claims to support HTTPS so I'm hoping that's not the issue.
Running VS 12 as administrator, I've created a blank VS solution, added a new (.NET 4.5) cloud service with a new ASP.NET MVC 4 Internet web application project, and hit F5. Everything works fine. Then, when I add an SSL certificate to the web role and replace the HTTP endpoint (port 80) with an HTTPS endpoint (port 443, with the certificate), hitting F5 produces the following error message:
Windows Azure Tools for Microsoft Visual Studio
There was an error attaching the debugger to the role instance 'deployment18(32).WindowsAzureCloudService.Mvc4WebRole_IN_0' with Process Id: 4892'. Unable to attach. Access is denied.
Note, the last part ("Access is denied") comes in a few variations, a particularly pleasant one being "Catastrophic failure". :)
The only message in the VS Output window ('General' output) is:
Windows Azure Tools: Warning: Remapping private port 443 to 444 in role 'Mvc4WebRole' to avoid conflict during emulation.
The Compute Emulator UI is not much help; just before the instance disappears, this is the only console output that I get consistently (sometimes other messages appear, but sporadically every few runs; I'm not sure how to capture these):
[fabric] Role Instance: deployment18(33).WindowsAzureCloudService.Mvc4WebRole.0
[fabric] Role state Unknown
[fabric] Role state Suspended
[fabric] Role state Busy
[fabric] Role state Unhealthy
[fabric] Role state Stopped
The certificate was obtained from a CA and properly imported into the Local Machine/Personal/Certificates store as a .pfx with private key, extended properties, and marked as exportable, for what it's worth.
When I attempt to publish the service to Azure, I get one build (validation) warning about the database connection string (which I assume is irrelevant):
The connection string 'DefaultConnection' is using a local database '(LocalDb)\v11.0' in project 'Mvc4WebRole'. This connection string will not work when you run this application in Windows Azure. To access a different database, you should update the connection string in the web.config file.
Probably more important, the deployment actually fails with the following history in the Windows Azure Activity Log window:
9:00:25 AM - Warning: There are package validation warnings.
9:00:25 AM - Preparing deployment for WindowsAzureCloudService - 1/3/2013 8:59:55 AM with Subscription ID '<...>' using Service Management URL 'https://management.core.windows.net/'...
9:00:25 AM - Connecting...
9:00:26 AM - Object reference not set to an instance of an object.
9:00:26 AM - Deployment failed with a fatal error
Can someone help me troubleshoot this issue? I've rebooted a few times. ;)
Thanks in advance!
EDIT (Jan. 3, 4:44 PM): I have a few ideas that might help me make progress, but some are pretty drastic so any advice would be appreciated:
Is there a way to capture all the output from the Compute Emulator (Dev Fabric) to a log file so I can review it? (System.Diagnostic.Trace calls from my service won't help, since I don't even get as far as the RoleEntryPoint when using HTTPS!) I figured this out; see next edit.
That null pointer exception during the Azure deployment has me worried. Is it worthwhile to try reinstalling the Azure SDK, and if so, how should I go about doing a clean install of it?
Has anyone seen a problem of this sort disappear when switching to using full IIS for the emulator? (That seems unlikely since IIS vs. IIS Express should have no relevance to the Azure deployment.)
EDIT (Jan. 4, 10:15 AM): Bad news: I tried the suggestion to grant Read access to the certificates, but it didn't help in my case. Good news: I managed to capture one of those sporadic messages in the Compute Emulator UI before it shut down; it was a bit of info from some diagnostics. Not helpful in and of itself, but it revealed where the Development Fabric was storing its temporary files:
[Diagnostics] Information: C:\Users\Lars\AppData\Local\dftmp\Resources\0005155d-4592-40f4-812e-18793b26576c\directory\DiagnosticStore\Monitor
The GUID portion gets recreated for every deployment, and it is deleted when the deployment goes away (as it always does in my case). But in the parent directory ('dftmp'), there are a few helpful directories that I then monitored during a new deployment: DevFCLogs, DFAgentLogs, and IISConfiguratorLogs. I guess that answers the first question I had yesterday! :)
DFAgentLogs\DFAgent.log: (41KB) No useful information. A bunch of "Failure to read pipe" messages and failures to get the role/deployment instance ID, which I assume are just noise.
DevFCLogs\DevFabric--2013.01.04--<...>.log: (510 KB) No useful information. I skimmed the file and also searched for 'error', 'failure', 'not found', 'certificate', and 'Mvc4WebRole_IN_0'; none of those showed any hints of what was going on.
IISConfiguratorLogs\IISConfigurator.log: (6 KB) Now we're making progress!! :) Can someone tell me what this means? (In the meantime, I'm off ILSpy-hunting... fun fun...)
IISConfigurator Information: 0 : [00006356:00000005, 2013/01/04 16:07:08.915] Using IIS Express appdomain
(...)
IISConfigurator Information: 0 : [00006356:00000005, 2013/01/04 16:07:08.936] Adding binding 127.255.0.0:444: to site deployment18(40).WindowsAzureCloudService.Mvc4WebRole_IN_0_Web
IISConfigurator Information: 0 : [00006356:00000005, 2013/01/04 16:07:10.484] Caught exception
IISConfigurator Information: 0 : [00006356:00000005, 2013/01/04 16:07:10.487] Exception:System.Runtime.InteropServices.COMException (0x800401F3): Invalid class string (Exception from HRESULT: 0x800401F3 (CO_E_CLASSSTRING))
Server stack trace:
at Microsoft.Web.Administration.Interop.IAppHostProperty.get_Value()
at Microsoft.Web.Administration.ConfigurationElement.GetPropertyValue(IAppHostProperty property)
at Microsoft.Web.Administration.Binding.get_CertificateHash()
at Microsoft.Web.Administration.BindingCollection.Add(Binding binding)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.WasManager.DeploySite(String roleId, WASite roleSite, String appPoolName, String sitePath, String iisLogsRootFolder, String failedRequestLogsRootFolder, List`1 bindings, List`1 protocols, FileManager fileManager, WAAppPool defaultAppPoolSettings, String roleGuid, String& appPoolSid, List`1 appPoolsAdded, String configPath)
EDIT (Jan. 4, 11 AM): ILSpy wasn't much help; the exception is being thrown at an interop point (we knew that already) while trying to get the hash of a certificate in order to set up the binding (we knew that too). Does anyone know what COM object would need to be registered in order to get a certificate hash for a binding in Microsoft.Web.Administration? Or how I could intercept the interop call to find out? Bonus points if you can tell me why this is happening in the first place. :)
I've had a similar problem on two computers. In both cases installing IIS solved the problem.
It seems to be enough to just install IIS (via add/remove Windows features); you don't need to start using it. The installation changes something, and after that my IIS Express started working with HTTPS from Visual Studio again.
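If you prefer the command line, enabling the IIS web server role with DISM from an elevated prompt should have the same effect (a sketch; /all also enables the parent features it depends on):

dism /online /enable-feature /featurename:IIS-WebServerRole /all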
There is a discussion of a similar issue on MSDN Social:
http://social.msdn.microsoft.com/Forums/nl-NL/windowsazuredevelopment/thread/ad362016-16f6-459a-8022-9307aa5f910e
The issue has also been raised on Microsoft Connect:
https://connect.microsoft.com/VisualStudio/feedback/details/758533
In my case the error in the log files was:
IISConfigurator Information: 0 : [00007644:00000007, 2013.01.17 00:39:18.523] Exception:System.Runtime.InteropServices.COMException (0x800401F3): Invalid class string (Exception from HRESULT: 0x800401F3 (CO_E_CLASSSTRING))
I found the log files in the C:\Users\\AppData\Local\dftmp\IISConfiguratorLogs directory.
When running locally with a private key cert for SSL, you'll need to give the user the emulator app is running under access to the private key. Open mmc.exe and add the Certificates >> Local Computer snap-in to view your certificate. Right-click the certificate, then All Tasks >> Manage Private Keys, and add IUSR and Network Service with at least read access.
For deployment to Azure, you'll need to upload the certificate to the Cloud Service and make sure the certificate is valid for the domain.
Follow step 11 from http://www.microsoft.com/en-us/download/details.aspx?id=35448 (from this SO post).
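For reference, a rough sketch of how the certificate and HTTPS endpoint are wired up for a classic cloud service web role; the endpoint/certificate names and the thumbprint below are placeholders, and most of this is generated for you when you add the endpoint in the role designer:

ServiceDefinition.csdef (inside the <WebRole> element):

    <Endpoints>
      <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
    </Endpoints>
    <Certificates>
      <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
    </Certificates>

ServiceConfiguration.Cloud.cscfg (inside the matching <Role> element):

    <Certificates>
      <Certificate name="SslCert" thumbprint="<your certificate thumbprint>" thumbprintAlgorithm="sha1" />
    </Certificates>

The thumbprint in the .cscfg must match the certificate you uploaded to the cloud service's Certificates tab.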