Liferay startup events explanation

1) I was trying to create a portal startup hook and was overriding a startup action. The wording in the application.startup.events description was a bit vague: it says that this event runs once for every web site instance of the portal that initializes. Does 'web site instance of the portal' mean the same as a portal instance?
2) Whenever I redeploy my hook, my application startup event action gets called. Does that mean the portal instance reinitializes? If so, why don't I observe the same behavior when I redeploy other plugins? (When I redeploy other plugins, the startup event action doesn't get called.)
3) When I try to override global.startup.events instead of application.startup.events in my hook, my startup action never gets called (I inserted some print statements into the startup method and restarted the server). How can this behavior be explained?
I'd appreciate even a partial answer, since it would still benefit me and probably the community.
Thanks in advance

A hook is deployed as a web application, so an action configured in application.startup.events will fire when the hook gets deployed. AFAIK it will be called once for each available instance id (technically the companyId). The wording in the documentation is unfortunate, but since all web applications deploy independently of each other, this is the best that's available. Besides, if you update your hook's code and redeploy it, you probably want the changed startup action to run.
global.startup.events cannot be configured in a hook, which is why you see no activity: the setting is simply ignored there.

Related

Deploying an Azure Function from VS Code - Successful but not visible in the Portal

I created a function and am trying to deploy it from VS Code by clicking Deploy to Function App.... The deployment runs successfully according to the output log (Deployment successful), but when I go to the portal, the function is not listed under Functions.
What is the problem here, and what should I do?
When I debug in VS Code, I get this: No job functions found. Try making your job classes and methods public. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
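As an aside on that debug message: "No job functions found" generally means the Functions host could not discover any public, trigger-bound entry point in the deployed code. Below is a minimal sketch of what the host is looking for, assuming an in-process C# function app (the requirements.txt answer further down suggests this thread may actually concern a Python app, so treat this purely as an illustration):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Both the class and the method must be public so the Functions host can discover them.
public static class HeartbeatFunction
{
    [FunctionName("Heartbeat")]
    public static void Run(
        // Timer trigger that fires every five minutes; the timer binding ships with the Functions SDK.
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation($"Heartbeat executed at {DateTime.UtcNow:O}");
    }
}

If no class and method of this shape (public, decorated with a trigger attribute) makes it into the deployment package, the host reports that no job functions were found and the Functions list in the portal stays empty.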
Unfortunately, I wouldn't know why those steps don't work for uploading. For me the deployment finishes, and every single time the function becomes visible in my portal. Maybe there is a slight difference, though: in my case the App Service itself is pre-created via Terraform, and only the uploading of the code is done via VS Code.
As far as deletion goes:
Open the resource group, look up the App Service in the list, select the checkbox in front of it, and click Delete in the top nav bar of that pane.
Trying to delete it any other way will indeed give you the "Not found" error.
I've had the same 'issue'; in my case it turned out to be a bad entry in requirements.txt.
I had an incorrect line with 'io', and while it was present the deployment appeared to complete successfully in VS Code, but the function was either not updated (if it had been deployed before) or not deployed at all (if it hadn't), resulting in the same empty Functions list.
Having other requirements such as 'numpy' or 'scipy' worked just fine.
It's an old thread but maybe it'll be helpful to whoever gets here in the future.
Even as of now, some changes I make in VS Code take time to become visible on the portal. I had a similar issue with resources, i.e. creating a resource from VS Code wouldn't make it immediately visible on the Azure Portal. You can always go to Functions on the portal and click Refresh. Also try going to Advanced Tools, then Kudu, and checking whether your function can be found there.
One word of advice: if you publish your functions from VS Code, then work on that resource only from VS Code. You will find it reiterated all over Azure Functions docs that:
Publishing to an existing function app overwrites the content of that app in Azure.

Scheduler not firing

I have a development instance on my laptop and for some reason I cannot get any Automation schedules to fire. If I create a schedule, it looks ok and shows the appropriate next execution time, but it never changes. What am I missing to enable the scheduler?
If you restored a snapshot on your site, you will need to go to the Automation Schedules screen (SM205030) and click the Initialize Scheduler action.
This action exists to prevent schedules from running directly after restoring a backup of an environment.
This was done to keep unintended actions from happening on a test environment, e.g. spamming customers with email, uploading important files to the file provider, etc.
This is why the button needs to be clicked manually after restoring a backup.
With the help of support, I was able to resolve this. We deleted the application from the configuration manager and recreated it. The scheduler then started working. There was nothing obviously wrong, but creating the new instance worked.

Can a WebRole be "recycled" programmatically?

IIS7 application pools can be recycled programmatically. Is there an equivalent concept for the web role in Azure?
That is the basic question, but for background on why I ask, I include the following...
We are attempting to get Umbraco installed in Azure, and the Umbraco installation wizard writes its configuration information and then manually restarts the application pool (in IIS) to re-read the configuration it just wrote. It needs to work the same way in Azure, but at this point we are not able to get it to reinitialize itself from scratch (as it does in IIS7).
You can call RoleEnvironment.RequestRecycle() for a given role instance. This effectively restarts the Windows Server VM, which re-executes your startup scripts, OnStart() method, and Run() method. When doing this, you may want to leave yourself some type of breadcrumb: if you find something already installed upon restart, just skip the install process; otherwise, install and request a recycle.
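A minimal sketch of that breadcrumb pattern, assuming a classic web role built on Microsoft.WindowsAzure.ServiceRuntime (the marker-file path and the install routine are hypothetical placeholders):

using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    // Hypothetical marker file used as the breadcrumb.
    private const string BreadcrumbPath = @"C:\Resources\install-complete.txt";

    public override bool OnStart()
    {
        if (!File.Exists(BreadcrumbPath))
        {
            RunInstallSteps();                 // hypothetical: write configuration, set up the site, etc.
            File.WriteAllText(BreadcrumbPath, "done");
            RoleEnvironment.RequestRecycle();  // restart this instance so it comes back up with the new configuration
        }
        // On the next start the breadcrumb exists, so the install steps are skipped.
        return base.OnStart();
    }

    private static void RunInstallSteps()
    {
        // Placeholder for the one-time installation/configuration work.
    }
}

In the Umbraco scenario, the install steps would be whatever the wizard writes just before it would normally recycle the IIS application pool.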

deploying to sharepoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being left in the "error" state in the web control; a redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay between the two steps in a single program does not have a similar effect.
If I run the deploy a second time for packages which did not deploy successfully, it works fine, as long as I don't do it programmatically, in which case it makes no difference.
It affects different files each time, and sometimes none at all.
I do use stsadm -execadmsvcjobs between the add and deploy steps, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any idea why this happens, or how to solve it? It causes problems when I get to real implementations.
The problem lies in the fact that SharePoint performs app pool recycles and/or full iisresets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Administration app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution. Hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to make a WebRequest to at least Central Administration (or just do an SPSite site = new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
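A rough sketch of that warm-up step, assuming the deployment app runs on a SharePoint server with the object model available (the Central Administration URL, retry count, and delay are placeholders):

using System;
using System.Threading;
using Microsoft.SharePoint;

internal static class DeploymentWarmUp
{
    public static void WarmUpCentralAdmin(string centralAdminUrl)
    {
        // Retry a few times: opening the root web forces the Central Administration
        // app pool to spin up again after the recycle/iisreset triggered by the deployment.
        for (int attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                using (SPSite site = new SPSite(centralAdminUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    Console.WriteLine("Central Administration responded: " + web.Title);
                    return;
                }
            }
            catch (Exception)
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));  // wait while IIS comes back, then try again
            }
        }
    }
}

Calling something like this after each add/deploy step (alongside stsadm -execadmsvcjobs) is the "hit Central Administration until it answers" idea described above.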

RoleManager.WriteToLog in Azure Development Fabric

I'm just taking my first steps with Azure, and the first thing I see in the development fabric is a bunch of console-style logging windows. I figure that's going to be handy, so I decide to work out how to write something there and stumble upon this:
Microsoft.ServiceHosting.ServiceRuntime.RoleManager.WriteToLog("Information", "Message");
Cool, except I don't see anything in the log. I saw a post somewhere that after the first install you need to reboot before this will work, so I tried that, but still nothing.
I've checked with a breakpoint that the code is executing, and I've checked the logging level in the dev fabric.
Is this supposed to work, or am I completely off base?
Just to add some more information about what I'm doing:
Started with VS08's new project wizard for Cloud Service
In the wizard added a single ASP.NET Web Role project
In the Page_Load method in Default.aspx.cs I added the WriteToLog line shown above
Ran the project and in the dev fabric UI, drilled down the tree to web role instance "0"
Nothing in the displayed log except some role instance start up messages followed by a bunch of health status messages.
Tried reloading the page a few times, the breakpoint in Page_Load gets hit but nothing in the log.
The logging level is set to Information, and the other events are at level Information as well, so I don't think it's a logging-level issue.
Logging level is the common gotcha, but there's also a delay before logs start showing up in the dev fabric, so make sure that this message is getting logged at least 10 seconds after the application starts.
After some more googling, it seems like this is a known issue:
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=481184
