Weird Bug with Files from GitHub Causing CommandBox to be Unable to Start - lucee

OS: Ubuntu 22.04 | CommandBox Version: 5.5.2 | Lucee Engine: 5.3.9+141
Having a really strange issue. I've installed CommandBox from scratch and am using the Lucee engine. Everything works fine until I pull my web files down from GitHub. Initially, all the files are served properly, but after restarting the service, it is unable to start. I've tried a few things (changing user/group ownership, copying the files manually, even changing file permissions on the folder and everything inside it), but it fails to start every single time.
I'm able to bring it up by deleting the web root folder and recreating it. I'm also able to serve files that I create locally with echo/touch with no problem. Kind of at a loss as to where to go from here.

It sounds like the number of files in your web root is causing the XNIO directory watcher to take a long time to start up. You can disable it like so:
server set runwar.args=['--resource-manager-file-system-watcher=false']
And then restart the server. Please note that this setting will be disabled by default in the next version of CommandBox due to occasional issues like this.
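For reference, here's a minimal sketch of what that setting should end up looking like in the server's server.json (assuming the default server.json in your web root):

{
    "runwar": {
        "args": [
            "--resource-manager-file-system-watcher=false"
        ]
    }
}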

Related

Azure Functions Core tools suddenly stopped working

I was in the middle of working on some minor code changes when all of a sudden I started getting the following error on startup:
A host error has occurred during startup operation '78d5d8fd-e81c-4707-87ca-6b801430fef1'.
[2021-01-08T13:02:40.279Z] System.IO.FileSystem: Could not find a part of the path 'C:\Users\schiefaw\AppData\Local\AzureFunctionsTools\Releases\3.17.0\workers'.
I looked at the path and found everything exists until I get to "workers".
I, of course, assumed it was something I did, so I backed out all my changes, to no effect. Then I uninstalled Visual Studio and all the Azure products I could find and reinstalled, to no effect. I created a new user (since the file it is looking for is in my user folder), to no effect.
I then created an entirely new instance of a Windows virtual machine and installed the development environment, to no effect (same error).
I am completely stuck on this. Does anyone have any ideas on what I can try next?
Thanks!
I think this is a bug in the 3.17 release, but here is a workaround: you can add the "workers" folder (an empty folder) and it should work. Alternatively, if you have a copy of a previous version (such as 3.16.x), you can copy its contents into the 3.17.0 folder.
You can read more here: https://developercommunity.visualstudio.com/content/problem/1304718/azure-functions-local-debugging-broke-with-3170-up.html
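In practice, the first workaround is a single command; a minimal sketch, where <user> is a placeholder for your own profile name:

mkdir "C:\Users\<user>\AppData\Local\AzureFunctionsTools\Releases\3.17.0\workers"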

Error loading extension with localization

My nw.js app suddenly stopped working on Windows 10 with the following error:
Failed to load extension from {path}. Default locale is defined but default data couldn't be loaded.
Structure:
  _locales/
    en/
  js/i18n.js
Manifest:
  "default_language": "en"
I don't know what Windows has changed recently, but it had been working solidly on previous versions of Windows for years. I've updated the country tag as per the available language packs for Windows and the Chromium tags, but still no luck.
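For context: as I understand Chromium's extension i18n, the loader expects a messages.json under each locale folder, and the manifest key is spelled "default_locale". A minimal sketch of that layout, with a placeholder message:

_locales/en/messages.json, containing at minimum something like:
{ "example": { "message": "Example" } }
manifest.json, containing:
"default_locale": "en"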
According to this thread, several workarounds have been reported:

"I use Chrome and stopped updating it once they made tabbed-options mandatory. I also keep my User Data folder in a non-default location. When this bug started, I used the --single-process trick for a while, but as mwalsher said, it stopped working when they messed with the Web Store. I used but hate the manual method outlined above, so what I did was to simply move my User Data folder to a FAT32 partition. Problem solved; now I can successfully install packed extensions from an older version of Chromium, running in normal mode, to a non-default User Data folder. Even better, thanks to a system I set up (http://superuser.com/questions/196886/how-to-relocate-chrome-profile-but-also-make-new-links-open-with-the-relocated-p/257706#257706), it was /extremely/ easy to change it (I had only to change a single byte and reboot)."

"Changing the security permissions of the temp directory might fix this problem. On my computer, the temp directory only had three full-control users (My Account, System, Administrators) to begin with. I manually gave Everyone full control of this folder (maybe adding the list permission alone would also work). It didn't work immediately, however; to my great surprise, it only took effect the next day, after I restarted the computer."

"As a workaround, --no-sandbox might work. Note that this is just as unsafe as --single-process, so be careful when using it."

Another suggestion is to perform a "chrome://restart".

Try this first: "I restarted Chrome and tried to install it again, and now it installed cleanly."

npm start not refreshing new content on save on one computer, but is on another with almost exact same setup

My work computer is a Windows 10 Pro machine and my laptop is Windows 10 Home. I work on the same project on both, pushing and pulling to Git, learning React through Udemy. Both computers use Chrome, both use Bash on Ubuntu on Windows with the latest updates, both use ConEmu for the console, and both have npm -v = 3.10.10 and node -v = 6.11.2. The hardware is different, obviously, but I'm not sure that's relevant or worth listing.
Anyway, with this starter project I'm playing around with, when I make changes while npm start is running, you can see activity in the console, hit refresh in the browser, and any changes made are reflected.
On the laptop, this process does not work: make a change, save, no activity in the console, and refreshing the browser does not reflect the changes. I have to restart npm start for changes to be reflected. A little irritating, to say the least.
Any idea what might cause this? I really haven't come across anything in my Googling efforts.
If you are using npm in WSL 2.0 for development, please refer to the last point in this:
https://create-react-app.dev/docs/troubleshooting/#npm-start-doesnt-detect-changes
While WSL1.0 doesn't use a VM, WSL2.0 does use a lightweight VM, so adding
CHOKIDAR_USEPOLLING=true
in a .env file in the project directory fixed the problem.
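That is, a one-line file at the project root (react-scripts picks it up when the dev server starts; the # line is just a comment):

# .env -- make chokidar poll for changes instead of relying on filesystem events
CHOKIDAR_USEPOLLING=true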
On a side note, you might want to take a look at this:
Client side
To ensure client side changes aren't being cached, you can open devtools > Network, and check "Disable cache". After enabling this, you won't have anything in the cache as long as devtools is open.
Alternatively, you can use incognito / private browsing mode to prevent the cache from holding on to things.
Server side
I'm sure you've realized that it's a pain to restart your server every time you want to see your code update. There are several tools that will detect file changes and handle restarting the server automatically (see the sketch after this list):
PM2
Nodemon
Forever
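For example, a minimal sketch with Nodemon, where server.js is a hypothetical entry file (by default Nodemon restarts it whenever files in the working directory change):

npx nodemon server.js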
I just added a .env file with FAST_REFRESH=false inside.
For me, working in Windows, WSL 2 caused this not to work. Running npm start in Command Prompt, not WSL, solved this issue for me.

How to avoid restarting server every time I need to make a React change?

I'm having an issue in my React environment where I must restart ('npm start') my server every time I want to view an update in the browser. Others seem to be able to simply refresh the browser without the need to restart their servers.
For instance, if I make an update in one of the React Components I can't simply refresh the web page, I have to restart the entire server.
Any suggestions how to fix this issue so I don't need to restart every time?
This problem was fixed once I moved my application out of my Dropbox directory.
Once I moved the application out of the Dropbox directory, I no longer needed to manually restart the server when I made an edit to a React component. Note that the application works just fine and auto-refreshes from a Google Drive folder (linked to the cloud) or a general non-cloud-linked folder on my HD.
I was getting the same problem using Visual Studio Code. When I made changes, nothing was showing up. VS Code gave me a hint by saying
"Visual Studio Code is unable to watch for file changes in this large workspace"
so I found these instructions, which solved the problem. Could be related.
When you see this notification, it indicates that the VS Code file watcher is running out of handles because the workspace is large and contains many files. The current limit can be viewed by running:
cat /proc/sys/fs/inotify/max_user_watches
The limit can be increased to its maximum by editing /etc/sysctl.conf and adding this line to the end of the file:
fs.inotify.max_user_watches=524288
The new value can then be loaded by running sudo sysctl -p. Note that Arch Linux works a little differently; see Increasing the amount of inotify watchers for details.
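Put together, a minimal sketch of the whole sequence on a typical Linux system (appending with tee is just one common way to edit the file):

cat /proc/sys/fs/inotify/max_user_watches
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p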
Check out Create-React-App by Facebook. It has all the essential tools you'll need when developing React apps.
I use a combination of Webpack to bundle the JS code and Nodemon to restart the server. They both have watch functionality, so they can watch the code to see if anything has changed. That seems to be the norm, from my research.
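A minimal sketch of that combination, run in two terminals (dist/server.js is a hypothetical bundle path; adjust it to your Webpack output):

npx webpack --watch
npx nodemon dist/server.js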

VS2012 cannot copy DeploymentItem to test output folder

I have a project written in MSTest. I have 3 machines that have VS2012 Ultimate Update 4 installed, but with this project, on one of the 3 machines, the DeploymentItem files are not copied to the output folder, which in turn causes unit test failures. The other two machines are fine with the same project. I am using TFS as the source control system. Can someone help me fix this issue?
Update: I have given up. This seems to be an issue with the VS2012 installation itself, because the same project can run tests fine on other machines.
Do you have a different test project on that machine that points to the same output folder?
According to this thread, one of the projects could fail to output in that case.
It could be that you excluded the extra project on the working machines, or that the order in which the projects are built and deployed is different. Are there any other differences (like the number of processor cores) between the machines?
Another cause could be differences in user rights on the different machines (depending on the destination directory you are deploying to). Try starting Visual Studio on the failing machine by right-clicking on the icon and choosing Run as administrator. Does that make any difference?
Since it is working on your other machines, I guess the Copy to Output Directory properties are already set to true.
This is a weird one, and I encountered a similar issue most recently where the build and output to the folder succeeded on a few machines while failing on others. My web project was referencing assemblies from the GAC and from a folder located at a relative path inside the solution.
On the machines where the output was failing, I was receiving an error in the output, something like: "Can't locate or access assemblies in the path.."
I resolved the issue by:
Removing all the assemblies that were referenced from the relative path.
(Optional) Manually removing the debug and release folders in both the bin and obj folders for all projects. (A Clean Solution from the Build menu would probably work here as well, but I avoid taking risks and wanted to be sure.)
Adding back the assemblies from the local path.
Rebuilding the project.
Running the test project; everything was working fine on all machines.
Hope this helps!
It turns out it was my own fault: I hadn't set the test settings in the "Test -> Test Settings" menu. But how could I have known? VS2012 on my other machines was configured automatically; for some reason, that particular machine wasn't. So there you have it, the answer to the question. It's a simple one, but when you don't know, it's utterly hard.
