DNN 9.01.01 Export/Import Stuck on Submitted Status - Azure

We are transitioning to the DNN 9.01.01 build, but it seems that the import/export feature is not working properly. I submitted an import, but it has been sitting in "Submitted" status for the last 8 hours.
Is this a known issue, or is there configuration on the server that is preventing import/export from working?
Our instance is installed on Azure.
Thanks

This thread hasn't been touched in a long time, but I dug around and found the problem. I fixed it by directly editing the DNN database. I'm on version 9.4, although I'm sure this would work with any version, as this issue is apparently caused by some wonky code in the Azure App Service deployment packages.
To resolve it, I just had to manually edit the dbo.Schedule table. I use Azure Data Studio because I'm on a Mac, but SSMS or any other manager will work as well. I'm sure you could even use the DNN built-in editor, although I'm not very familiar with it.
While digging through the entries I noticed that, unlike the non-operational Export/Import job, all the working jobs had a NULL value in the "Server" field, whereas the Export/Import job had the Azure server name written to it. I manually changed the value of this field to NULL, and the Site Import job that had been perpetually spinning started immediately.
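For anyone who wants the exact change, it boils down to one UPDATE (roughly; the column shows up as "Server" or "Servers" depending on DNN version, so check your own dbo.Schedule schema first, and the ScheduleID below is a placeholder you'd take from the SELECT):

SELECT ScheduleID, TypeFullName, Servers FROM dbo.Schedule WHERE TypeFullName LIKE '%ExportImport%'; -- find the stuck task
UPDATE dbo.Schedule SET Servers = NULL WHERE ScheduleID = 42; -- 42 is a placeholder; clears the pinned server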
Also, for posterity: make sure you don't have 15 different import jobs queued up before you do this, because they will ALL begin processing once you commit the new value to the DB. If it took you a few tries to figure out why they were spinning, you will probably want to go to the scheduler and delete anything you don't want to run before making the DB edit.
Hope this helps save someone else some time. Cheers!

We contacted support as well, and it looks like it was an issue with installing DNN as an Azure web app.
We had to delete all the unused servers, set the task to run on the currently active server, and start the import/export task manually on the scheduler tab.
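If you'd rather do that cleanup in SQL than through the host UI, something like this works (a sketch assuming the standard DNN tables: dbo.WebServers holds the registered servers, the names and ID below are placeholders, and DNN may store the Servers value with surrounding commas, so copy the format from a working row):

SELECT ServerID, ServerName, LastActivityDate FROM dbo.WebServers; -- spot the stale entries
DELETE FROM dbo.WebServers WHERE ServerName = 'OLD-SERVER'; -- remove an unused server
UPDATE dbo.Schedule SET Servers = 'ACTIVE-SERVER' WHERE ScheduleID = 42; -- pin the task to the live server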

I had this issue too. When checking the other tasks scheduled for execution, I noticed the server field was empty, while on the import/export task there were comma-separated entries. When I cleared the import/export task's server field, the task ran correctly.

I believe they left out code for this in DNN 9. I tried using it for a customer and it was useless.
I inquired and got a response that said it was an oversight.

To add to the possible causes: we had renamed our server, and the scheduled task still had the original server name. Once we changed the name under the task to the new one, it started running as scheduled.
HTH
Dave

Related

Azure Recovery Services scheduled tasks keep going disabled

I have installed the Azure Recovery Services agent (MARS) onto a Server 2019 machine. I can fully configure it using the GUI, but the scheduled backups just don't run.
I can run the backup manually and it runs perfectly and completes quickly; however, when I try to use the scheduler, it doesn't run.
I have checked the Task Scheduler and the job keeps switching to disabled with the notification:
User "System" disabled Task Scheduler task "\Microsoft\OnlineBackup\Microsoft-OnlineBackup"
When I installed the application, I changed the default path to C:\Domain Services to keep them separate. Is this where it went wrong?
I have other servers on the backup platform which are not having any issues at all. I have also tried the steps in:
https://learn.microsoft.com/en-us/azure/backup/backup-azure-mars-troubleshoot#backups-dont-run-according-to-schedule
And also
https://dirteam.com/bas/2019/01/09/the-mysterious-case-of-azure-backup-agent-not-running-its-schedule/
But it is not fixing the issue.
I am completely out of ideas, hoping that somebody can help me!
Change the settings in the task scheduler for Online Backups. See the snippet below.
I have no idea how, but the system is now working correctly and not being disabled. I removed all the MARS software from the machine and re-installed it; it now works correctly and has been backing up for a few weeks.
Thank you for all your assistance.

Azure deployment to stage ignores service configuration

I created a cloud service and tested it successfully locally. I added service configurations for stage and production. Here is a snippet of my staging configuration:
and here are my configuration settings:
Then, when I publish, I set up the deployment as follows:
All this worked about two weeks ago. But now it deploys in VS, and when I look into the Azure service Configure area, it looks like this:
I played around a little with the "Update development ..." checkbox on the second screen, but the result is the same.
So it ignores all the settings I made and just won't transition my configuration to the one I named "CloudStage". My current Web PI tells me that I am using Windows Azure SDK for .NET (VS 2013) 2.3. I don't get it.
Edit
Some more things I observed:
No WADLogsTable or WADWindowsEventLogsTable is generated automatically in the staging storage.
I deactivated Remote Desktop, because it was one of the changes I made to monitor the event log (which wasn't useful here).
I manually changed the connection strings in the Azure Portal, but it seems as if the worker is totally unaware of the storage (rebooted it with no success).
Edit
I noticed another thing. Here you can see a running deployment of my service:
See the warning mark on the left? If I go to my Error List, this is shown:
This warning is senseless, since it tells me that I did everything the right way. My *.Local.cscfg files are pointing to the local storage. So?!?
This seems weird. Please check the values in your ServiceConfiguration.CloudStage.cscfg to verify the expected settings.
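For reference, a stripped-down ServiceConfiguration.CloudStage.cscfg normally looks something like this (the service, role, and account names here are placeholders, not taken from the post):

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="MyWorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- points diagnostics (WADLogsTable etc.) at the staging storage account -->
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=mystagingaccount;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>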
Have you tried updating any other property, like enabling Remote Desktop? Does that get updated on your deployment? You should select the "Deployment Update" check box in the publish dialog. Then, when deploying to an existing cloud service, it should ask you if you want to replace it.
If you get the object reference error every time you right-click on the project, there might be some issue with the Azure SDK setup.
I'm a little bit further now. What I did was:
Deleted all services in Azure.
Deleted all storage accounts in Azure.
Removed my service project completely from the solution (not the library containing the worker logic).
Re-added storage accounts in Azure.
Re-added services in Azure.
Re-added a project to the solution and added the worker logic inside it.
Built up all the publishing stuff again.
Published it.
The first publish ended like the one described in my question. After I checked the "Update development ..." option in the properties of my worker, it finally took my configuration into the stage!
Then I noticed that WADLogsTable was still empty. I right-clicked the instance in Server Explorer and chose "Update diagnostics settings...". There was an option, "Transfer period", suddenly set to "None". That explained why my table was empty, and after I set it back to "1" my table is filling again!
Another funny thing besides: when I right-click my cloud project in the solution, I get "Object reference not set to an instance...". When I just left-click it and choose Build -> Publish, it works.
I just hope that I can help somebody with this. Let's see if it's stable now.
Edit: Yesterday it worked - today is still the same issue :-(.
When you get "Object reference not set to an instance.." for a CloudService project you usually have some kind of mismatch. It could be that a setting in the ServiceConfiguration is not defined in the ServiceDefinition. It could also be that there is a publish profile defined in the .ccproj file for the CloudService that doesn't exist. This might also be what is causing your problems with the different configurations.
So it turns out that the problem is completely on the client side. My Visual Studio (now with SDK 2.4) is doing something wrong. I set up a fresh installation with all the stuff needed :-( and there it works perfectly. I'll try to determine if one of my extensions is causing the strange "Object reference not set..." bug.
A repair installation of VS does not solve the problem, by the way.

Export Database on Azure Keeps Locking

I've been using the export database feature on Azure for the past year without problems.
However, during the past week the feature seems to error more than it actually works. On some occasions the export takes an hour or more (where it used to be a minute or less), and on other occasions it errors out without completing the job. On the info screen in Azure it says "Status - Pending" over and over again.
Does anybody know what I should do? Should I create another container? Could there be something else wrong?
Thanks for reading
It seems this happens from time to time due to Azure problems. For anybody who encounters this, a perfectly adequate workaround is to use SQL Server Management Studio: right-click the database and choose Tasks > Export Data-tier Application.
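If you'd rather script it than click through SSMS, the SqlPackage command-line tool does the same export (a sketch; the server, database, credentials, and file name below are placeholders):

SqlPackage /Action:Export /SourceServerName:myserver.database.windows.net /SourceDatabaseName:MyDb /SourceUser:myadmin /SourcePassword:... /TargetFile:MyDb.bacpac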
Here is another post on the same issue:
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/bb66505e-7008-4555-b8cd-eec9ae25066f/unable-to-export-db-via-portal?forum=ssdsgetstarted

SharePoint feature not getting activated by default

I have created a feature, and I am auto-activating it whenever a 'My Site' gets created.
I am activating it for the template SPSMSITEHOST.
This feature changes the Picture URL property of User Profile.
Now, the problem is that my feature gets activated, but it seems it does not execute the code by default and does not change the Picture URL property.
When I deactivate the feature and activate it again, the feature works absolutely fine, as expected.
P.S.: I am facing this issue on the production server; surprisingly, this works fine on the staging server. I mean the same code!
Any help?
Thanks.
Sounds like something is getting out of sync in your production environment. Could it be caused by load balancing?
Are you doing this through STSADM commands?
I would stick the following line after each command:
stsadm -o execadmsvcjobs
This will make sure processing for previous commands is done before moving on.
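So for feature activation, the sequence would look something like this (the feature name and URL are placeholders):

stsadm -o activatefeature -name MyFeature -url http://myserver -force
stsadm -o execadmsvcjobs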
If that's way off, then I would think it's something to do with:
a) The way you're activating the feature... If you're using feature stapling, are you sure that the latest version of your stapling mechanism is in place?
b) Assuming you have some sort of feature receiver in your code-behind: are you sure there isn't an error occurring that's being hidden by a try/catch? If there is, you need to see what the exception is...
If it works when you deactivate/activate the feature, that almost eliminates security issues.
Hope this helps.
After long investigation and searching on this problem, I tried rearranging the features in the package file according to the features' dependencies. It seems SharePoint activates these features one by one, in the order they are arranged in the package file, and this worked for me :)
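For anyone hunting for where that order lives: it's the FeatureManifest entries in the solution's manifest.xml, which (per the observation above) get activated top to bottom, so list base features before the ones that depend on them (the SolutionId and feature folder names here are made up for illustration):

<Solution xmlns="http://schemas.microsoft.com/sharepoint/" SolutionId="00000000-0000-0000-0000-000000000000">
  <FeatureManifests>
    <!-- the base feature comes first, the feature that depends on it after -->
    <FeatureManifest Location="MyBaseFeature\Feature.xml" />
    <FeatureManifest Location="MyProfilePictureFeature\Feature.xml" />
  </FeatureManifests>
</Solution>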

Deploying to SharePoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being in the "error" state in the web control; a redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay in the middle of a single program does not have a similar effect.
If I run the deploy twice for packages which did not deploy successfully, it works fine, as long as I do not try to do it programmatically, in which case it makes no difference.
It is different files each time, and sometimes none at all.
I do use stsadm -o execadmsvcjobs between add and deploy, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any ideas on why this happens, or how to solve it? When I get to real implementations it causes problems.
The problem lies in the fact that SharePoint will perform app pool recycles and/or full iisresets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Administration app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution. Hit F5 a lot of times on the solution management page while the retraction is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to make a WebRequest to at least Central Administration (or just do an SPSite site = new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
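A minimal sketch of that warm-up in C# (the class and method names are made up for illustration; call it after every add/deploy action):

using System;
using Microsoft.SharePoint;

class DeployHelper
{
    // Wakes the Central Administration app pool back up after SharePoint
    // recycles it, so the pending install/deploy timer job can actually run.
    static void WarmUpCentralAdmin(string centralAdminUrl)
    {
        using (SPSite site = new SPSite(centralAdminUrl))
        {
            // Opening the site is enough to force the app pool to spin up.
            Console.WriteLine("Warmed up: " + site.Url);
        }
    }
}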
