My newly created report is not running even after I scheduled it. I can't even find it under Schedule Management > View Future Activities, even after scheduling it several times for a future time. Please help me figure out where I went wrong.
Thanks.
There is likely an issue with the JobService.
First, set the logging to full:
From Cognos Connection > Launch > IBM Cognos Administration
On the Left Pane > Status > System
Drill down on the server until you see the services
JobService > Down Arrow > Set properties
Settings tab
Audit logging level for job service > Full
Review the log file:
Schedule the report
Remote log on to the server
Go to INSTALL_PATH\cognos\c10_64\logs (this is my Windows path)
Open cogserver.log
Find the relevant line. This should give you insight as to why it is failing.
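If the log is noisy, you can filter it from PowerShell. A minimal sketch, assuming the path above; INSTALL_PATH and MyReport are placeholders for your own install location and report name:
# Pull the last few cogserver.log lines that mention the job service or the report.
Select-String -Path "INSTALL_PATH\cognos\c10_64\logs\cogserver.log" -Pattern "JobService|MyReport" | Select-Object -Last 20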
My goal is to collect some custom logs in Azure Monitor from an external VM running on Linux. To that end, I've installed the Log Analytics agent according to the official MS documentation, and I ran the wizard to set up a custom log, which includes a sample file, a row delimiter and a location from which to collect the logs. However, I'm getting a warning message saying:
Two successive configuration applications from OMS Settings failed – please report issue to github.com/Microsoft/PowerShell-DSC-for-Linux/issues (1)
I tried to follow the proposed link to GitHub but wasn't able to find any solution there (or anywhere else), which is why I decided to give it a chance and ask the community here.
Oddly, the machine's heartbeat and manual syslog messages are being collected; only the custom logs are not.
Has anyone encountered this and managed to get past it? Thanks
Apparently, according to the MS answer, it is normal for the above warning message to be displayed. However, the reason the logs were not collected was that new entries must keep being appended to the target file processed by the OMS agent: the agent compares the file against its state at the last check, and only new entries trigger collection.
Hope this helps someone!
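To see that trigger in action, you can keep appending test entries to the monitored file. A minimal sketch, assuming a hypothetical custom log path of /var/log/myapp/custom.log (shown in PowerShell for consistency with the rest of this page; an echo >> loop in bash does the same on the Linux VM):
# Append a timestamped line every 10 seconds so the OMS agent sees new entries.
$logFile = "/var/log/myapp/custom.log"   # hypothetical path
while ($true) {
    Add-Content -Path $logFile -Value "$(Get-Date -Format o) test entry"
    Start-Sleep -Seconds 10
}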
I have a client that is trying to run the Validate Customer Balances process in accounts receivable and the process does not finish. What tools are available to debug this?
First of all you need to determine whether the process finishes with an error or is simply taking a long time.
This process shows a specific warning stating that balance validation may take a long time.
If you see the process progress indicator spinning, it's most likely just taking a long time. In that case you should either wait until the process finishes or, as the warning suggests, select fewer customers.
If the process finishes with an error indicator:
Open the trace window and run the process again; error details are usually visible in the traces. For example, if the traces say a financial period is inactive, you can resolve it by going to the financial periods screen and activating it.
Acumatica's T190 Quick Start in Customization and T270 Workflow API documentation describe the debugging process while working with Acumatica.
When you install a new Acumatica version, you must select the Install Debugger Tool checkbox.
You must open the Visual Studio project as Admin.
To be able to debug the site's original source code, find the web.config file of the Acumatica instance; the Visual Studio settings must also be changed as described in the documentation.
Change the "False" value to "True" (see the excerpt after these steps).
Go to Debug > Attach to Process... or press Ctrl+Alt+P. Search for w3wp.exe, select the instance you need and press Attach.
Then set breakpoints and debug as you would in any other project not related to Acumatica.
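For reference, the web.config edit in the step above typically targets the debug attribute of the compilation element under system.web; a minimal excerpt (other attributes in your file should be left as they are):
<!-- web.config of the Acumatica instance, inside <system.web> -->
<compilation debug="True">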
I have developed a custom timer job for SharePoint 2013 in Visual Studio 2012 which sends email notifications. It works fine on the development server but not in production.
I followed these steps to debug it on the development server:
1. Deploy the timer job to the respective site.
2. Restart the timer service in services.msc.
3. Attach to the OWSTIMER process in Visual Studio.
4. Finally, go to SharePoint 2013 Central Administration > Monitoring > Review Job Definitions, click on the respective timer job and choose Run Now.
After doing this, the breakpoint is hit in Visual Studio at the Execute() method, so on the development server it is running.
Now, on the production server I cannot debug using Visual Studio, so I have deployed the packaged solution (.wsp).
I can see the feature is activated in Site Collection Administration > Site Collection Features.
Now, on the production server I follow these steps:
1. Restart the timer service in services.msc.
2. Finally, go to SharePoint 2013 Central Administration > Monitoring > Review Job Definitions and click Run Now on the respective timer job.
Further, to test whether the timer job works on the production server, I added PortalLog.LogString("Flow test1"); at the start of the Execute() method. On the development server this runs and I see the message in the SharePoint logs, but on the production server I can't see "Flow Test1" in the logs after I click Run Now in Central Administration.
Can anyone suggest what is the issue and a possible solution?
It seems to me that there are two issues:
Logging: you should use another way of logging; a LoggingService based on SPDiagnosticsService is the preferred way. Use WriteEvent to write to the Event Log or WriteTrace to write to the ULS log (see the first sketch below).
Running the job: be sure the OWSTIMER.EXE service is restarted on all web servers (this can be done with a PowerShell script; see the second sketch below). I expect that you have correctly scheduled your job, either in your PowerShell script or in your feature receiver.
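For the first point, a quick way to check that ULS tracing works end-to-end is from the SharePoint Management Shell. A minimal sketch with a made-up category name; your job's Execute() method would call the same SPDiagnosticsService API:
$sev = [Microsoft.SharePoint.Administration.TraceSeverity]::Monitorable
$cat = New-Object Microsoft.SharePoint.Administration.SPDiagnosticsCategory("MyTimerJob", $sev, [Microsoft.SharePoint.Administration.EventSeverity]::Information)
# Writes "Flow test1 ..." to the ULS log under the MyTimerJob category.
[Microsoft.SharePoint.Administration.SPDiagnosticsService]::Local.WriteTrace(0, $cat, $sev, "Flow test1 from {0}", @($env:COMPUTERNAME))
For the second point, a sketch of the restart script, assuming PowerShell remoting is enabled between your servers:
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Restart the timer service on every SharePoint server in the farm.
Get-SPServer | Where-Object { $_.Role -ne "Invalid" } | ForEach-Object {
    Invoke-Command -ComputerName $_.Address -ScriptBlock { Restart-Service SPTimerV4 }
}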
Here are a few things to try:
Go to Central Administration and run the timer job from there. Then go to the job history page and check whether it finished successfully. If there was an error, you should see the error message there; that will give you a clue as to what's happening.
As Mazin said, restart the timer service on all servers. After deployment, the DLLs are cached by the process and you don't see your changes reflected.
Browse the SharePoint logs and search for an exception or error. You can narrow your search by selecting the timeframe on which your job ran. You can use the following PS script:
Get-SPLogEvent -StartTime "02/02/2014 11:00" -EndTime "02/02/2014 13:00" | Out-GridView
As stated here, it seems your job assembly may not be deployed to the GAC. Verify that the assembly is present there.
I am getting the following error:
Error: The process cannot access the file 'C:\DWASFiles\Sites\mywebsitename\VirtualDirectory0\site\wwwroot\newrelic\NewRelic.Agent.Core.dll' because it is being used by another process.
in the "Running deployment command..." log file when attempting to deploy an Azure website from GitHub.
Would appreciate any pointers as to what could be causing this.
UPDATE: Turns out this is also failing when publishing directly from VS.NET with the following:
1>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.targets(4196,5): Warning : An error was encountered when processing operation 'Create File' on 'NewRelic.Agent.Core.dll'.
1>Retrying operation 'Update' on object filePath (mywebsitename\newrelic\NewRelic.Agent.Core.dll). Attempt 1 of 2.
1>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.targets(4196,5): Error : Web deployment task failed. ((06/07/2013 23:54:58) An error occurred when the request was processed on the remote computer.)
This was working before and I am not sure why it would have stopped.
New Relic recommends stopping the website to unload the file and allow the deployment to go through.
As an alternative, you can set COR_ENABLE_PROFILING to 0 in your app settings on the configure tab to temporarily disable the profiling, which should then allow you to continue with the deployment while leaving the website operational throughout.
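If you prefer to script that app setting change, here is a minimal sketch using today's Az PowerShell module; the resource group and site names are made up, and note that Set-AzWebApp -AppSettings replaces the whole collection, so the existing settings are copied first:
$app = Get-AzWebApp -ResourceGroupName "my-rg" -Name "mywebsitename"
# Copy the existing settings, then override the profiler switch.
$settings = @{}
foreach ($s in $app.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }
$settings["COR_ENABLE_PROFILING"] = "0"   # set back to "1" after deploying
Set-AzWebApp -ResourceGroupName "my-rg" -Name "mywebsitename" -AppSettings $settings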
Instead of stopping the website you can temporarily turn off New Relic monitoring via the Configure tab on manage.windowsazure.com:
Configure > developer analytics > select "OFF" > Save
Deploy
Configure > developer analytics > select "ADD-ON" > Choose Add-on from dropdown > Save
Worked for me, both with a regular deployment from VS and an automatic build from VSO.
This is a known issue with the New Relic .NET agent for Azure Websites when performing an upgrade of the agent. The workaround is to stop the website to release the dll, finish the deployment and then restart the instance.
https://newrelic.com/docs/dotnet/azure-web-sites#h2-1
Not really a solution but more of a workaround: in the publish dialog, view a preview of the changes and uncheck the NewRelic.Agent.Core.dll file so that it doesn't get published.
None of these answers work for me anymore. I have an Azure Basic tier website plan, which hosts multiple actual websites.
If I don't stop the website, I get the error mentioned above (newrelic.agent.core.dll is in use)...
If I do stop the website (or all of them), I get an error saying that the publishing endpoint isn't available.
If I go to the configure tab and disable the AddOn, we still get the error mentioned above (newrelic.agent.core.dll is in use)...
Pretty much we just republish over and over again with different permutations of the above until it works. It took me hours the other day; today it took me 10 minutes.
If you are using webdeploy, you can configure your webdeploy settings so that it ignores the file. However, if you do that, you will have to deploy any updates to the New Relic agent manually.
I had a similar issue with the New Relic log file being locked, and solved it by:
Moving the New Relic log file to a subdirectory of the web root (e.g. \newreliclogs)
Adding two lines to my PowerShell script that configure a skip directive to ignore that whole directory, e.g. (where $destBaseOptions is of type Microsoft.Web.Deployment.DeploymentBaseOptions):
$skipDirective = new-object Microsoft.Web.Deployment.DeploymentSkipDirective("NewRelicLog","objectName=dirPath,absolutePath=.*\\newreliclogs$")
$destBaseOptions.SkipDirectives.Add($skipDirective)
Depending on how you are using webdeploy, the configuration is achieved slightly differently. I used the following links to help me piece it together:
https://technet.microsoft.com/en-us/library/dd569089(WS.10).aspx
https://msdn.microsoft.com/en-us/library/microsoft.web.deployment.deploymentskipdirective(v=vs.90).aspx
https://msdn.microsoft.com/en-us/library/dd543313(v=vs.90).aspx
http://blogs.iis.net/jamescoo/archive/2009/11/03/msdeploy-api-scenarios.aspx
http://forums.iis.net/p/1192163/2031814.aspx#2031813
And I used the powershell script from the Octopus Deploy Library at https://library.octopusdeploy.com/#!/step-template/actiontemplate-web-deploy-publish-website-(msdeploy).
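For what it's worth, the same skip rule can be passed straight to msdeploy.exe on the command line; the source path, destination name and credentials below are placeholders:
# Sync a local folder to the site, skipping the newreliclogs directory.
msdeploy.exe -verb:sync -source:contentPath="C:\build\site" -dest:contentPath="mywebsitename" -skip:objectName=dirPath,absolutePath=".*\\newreliclogs$"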
I have a task that calls the SSRS report server to render the report, then places the freshly rendered report into a SharePoint document library. The report server is set up in SharePoint Integrated Mode.
For smaller reports, everything works perfectly. However, if a report takes longer than about 90 seconds to generate, the call to render throws an exception of "The operation has timed out".
My proxy's timeout value is set to -1.
(RSExecClient.Timeout = System.Threading.Timeout.Infinite)
The httpRuntime's executionTimeout is set to "9000" in both the SharePoint web site's and the report server's web.config files.
Also, I have set the DatabaseQueryTimeout to "0" in the rsreportserver.config file.
I am connecting to the SSRS web service using "https://<server>/_vti_bin/ReportServer/ReportExecution2005.asmx".
I also noticed that I could connect through "http://<server>:8080/ReportServer/ReportExecution2005.asmx", but it seems I have to handle authentication and authorization myself, which caused other trouble.
Does anyone know what I am doing wrong, or at least of a logging location that I can go to for more information?
Thanks,
Michael
Try checking the timeout property on the database connection itself (the DataSource).
Keep in mind there are two .config files where the httpRuntime is defined: one for the Report Server and another for the Report Manager (see the excerpt below). I got bitten by this a few days ago.
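For reference, the element in question; executionTimeout is in seconds, matching the "9000" mentioned in the question:
<!-- inside <system.web> of each web.config -->
<httpRuntime executionTimeout="9000" />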
SSRS logs files and locations are documented here:
http://msdn.microsoft.com/en-us/library/ms157403.aspx
There are some scenarios where the reporting service times out because of memory issues.
Reporting Services releases memory, and it takes time to allocate it back. There are a couple of settings, which you can find in the following link:
http://msdn.microsoft.com/en-us/library/ms159206.aspx
Try setting WorkingSetMinimum to a higher number; you should see an improvement the next time you restart Reporting Services, but be careful not to over-allocate memory to the reporting service (see the example below).
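For reference, these settings live in rsreportserver.config and are expressed in kilobytes; the values below are only an example, so size them to your own server:
<WorkingSetMinimum>2400000</WorkingSetMinimum>
<WorkingSetMaximum>4800000</WorkingSetMaximum>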