I was testing out the threepenny-gui package for Haskell GUI programming. Following the instructions here, I did:
$ cd threepenny-gui-0.4.1.0/samples/
$ runhaskell.exe Chat.hs
and got:
Listening on http://0.0.0.0:8023/
[29/Apr/2014:11:37:44 -0400] Server.httpServe: START, binding to [http://0.0.0.0:8023/]
But nothing happens after that: no browser is fired up. Also, if I open Firefox and go to http://0.0.0.0:8023/, it says "Unable to connect". I've turned off Windows Firewall, but that didn't help.
Am I missing anything here?
Strange. A minute later I tried http://127.0.0.1:8023 instead of the hinted address http://0.0.0.0:8023/, and the GUI was visible in the browser immediately. I don't know why the misleading address was suggested; presumably it's because 0.0.0.0 is the address the server binds to, meaning "listen on all interfaces", and is not itself a destination a Windows browser can reach.
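For anyone hitting the same thing: 0.0.0.0 is a bind address, not a browsing address. A minimal Python sketch (nothing to do with threepenny itself, just demonstrating the addressing):

import socket

# Bind to the wildcard address; the OS picks a free port for us.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))    # "listen on all interfaces"
server.listen(1)
port = server.getsockname()[1]

# Clients connect through a concrete address such as loopback;
# "0.0.0.0" itself is not a reachable destination on Windows.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
print("connected via 127.0.0.1 on port", port)
client.close()
server.close()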
I'm trying to debug Azure Function scripts locally, in conjunction with Unity, but I'm getting timeout errors every time.
There are a few moving parts here, and I'm not sure which one is actually causing the problem... It might be a setting on Windows, as opposed to one of the programs.
I'm building in Unity 2019.4 and using PlayFab and its ability to call Azure Functions. When I execute scripts on the Azure servers, everything works correctly. But when I try to run with Local Debugging, I get WebException: The request timed out System.Net.HttpWebRequest.GetRequestStream (see the full error below).
Here's what I'm doing to set up:
Set PlayFab to Local Debugging (via the VS Code extension), confirming the JSON file is created in the temp folder
Install Azure Functions Core Tools from here
Start Azure Functions debugging from VS Code (terminal output shows that localhost is running it correctly)
The timeout error references the correct address, http://localhost:7071/api/CloudScript/ExecuteFunction, as confirmed in the VS Code terminal when Azure Functions debugging starts.
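Since the address is known, the endpoint can also be exercised directly, with Unity out of the loop entirely. A rough Python sketch (the URL is the one from the error; the payload shape is a placeholder, adjust it to whatever your function expects):

import json
import urllib.request

url = "http://localhost:7071/api/CloudScript/ExecuteFunction"
body = json.dumps({"FunctionName": "YourFunction"}).encode()  # placeholder payload

req = urllib.request.Request(url, data=body,
                             headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.read()[:200])
except Exception as exc:
    print("request failed:", exc)

If this also times out, the problem sits between Windows and the local Functions host rather than in Unity or PlayFab.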
When I clone the project to my MacBook Pro, everything runs smoothly in local debugging.
So, because of this, I've checked that ports aren't blocked, via PowerShell: netsh firewall show state, and told Windows Defender not to block anything from Unity or Code. When I run netstat -ab in PowerShell/CMD, I do get:
Can not obtain ownership information
TCP 0.0.0.0:7071 DESKTOP-COMPUTER:0 LISTENING
[func.exe]
TCP 0.0.0.0:7680 DESKTOP-COMPUTER:0 LISTENING
I don't know if this is a problem, or normal...
I don't even know what else to check for. This problem is beyond me. If anyone knows the solution, or can point me in the right direction, I'd be very grateful!
Below are the two errors from the Unity log whenever I execute an Azure Function script through PlayFab while debugging locally:
WebException: The request timed out
System.Net.HttpWebRequest.GetRequestStream () (at <14e3453b740b4bd690e8d4e5a013a715>:0)
PlayFab.Internal.PlayFabWebRequest.Post (PlayFab.Internal.CallRequestContainer reqContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:319)
Rethrow as WebException: Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:LogException(Exception)
PlayFab.Internal.PlayFabWebRequest:Post(CallRequestContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:332)
PlayFab.Internal.PlayFabWebRequest:WorkerThreadMainLoop() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:252)
System.Threading.ThreadHelper:ThreadStart()
Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:Log(Object)
DemoScript:onPlayFabError(PlayFabError) (at Assets/PlayFabPartySDK/Examples/DemoScript.cs:264)
PlayFab.Internal.<>c__DisplayClass30_0:<QueueRequestError>b__0() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:395)
PlayFab.Internal.PlayFabWebRequest:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:480)
PlayFab.Internal.PlayFabHttp:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabHTTP.cs:364)
Okay, TL;DR: the answer to the problem is that not everything was updated. So, update everything if you're experiencing the same problem.
More specifically, in my case it was the "Visual Studio Code Editor" package in Unity's Package Manager.
I just wanted to throw this out there in case anyone has a problem like this in the future. It may not be the same specific thing that needs upgrading, but search around for everything involved and make sure it's updated, not just the big, obvious things (like Unity or your IDE). Luckily for me, the out-of-date package was starting to cause other problems, and after much head-banging trying to solve those, I came across this fix.
Good luck, future fellow idiots!
I have my work computer which is a Windows 10 Pro and my laptop is a Windows 10 Home. Working on the same project on both: push and pull to Git. Learning React through Udemy. Both computers using Chrome. Both using Bash on Ubuntu on Windows with latest updates. Both using ConEmu for the console. Both npm -v = 3.10.10. Both node -v = 6.11.2. Hardware is different obviously, but not sure that is relevant and worth listing.
Anyway, with this starter project I'm playing around with: on the work computer, when I make changes while npm start is running, I can see activity in the console, hit refresh in the browser, and any changes made are reflected.
On the laptop, this process does not work. Make a change, save, no activity in the console, and refreshing the browser does not reflect the changes. I have to restart npm start for changes to be reflected. A little irritating, to say the least.
Any idea what might cause this? I really haven't come across anything in my Googling efforts.
If you are using npm in WSL 2.0 for development, please refer to the last point here:
https://create-react-app.dev/docs/troubleshooting/#npm-start-doesnt-detect-changes
While WSL 1.0 doesn't use a VM, WSL 2.0 does use a lightweight VM, so file-change notifications from the Windows filesystem don't reach the watcher and it has to poll instead. Adding
CHOKIDAR_USEPOLLING=true
to a .env file in the project directory fixed the problem.
On a side note, you might want to take a look at this
Client side
To ensure client-side changes aren't being cached, you can open devtools > Network and check "Disable cache". Nothing will be served from the cache as long as devtools is open.
Alternatively, you can use incognito / private browsing mode to prevent the cache from holding on to things.
Server side
I'm sure you've realized that it's a pain to restart your server every time you want to see your code update. There are several tools that will detect file changes and handle restarting the server automatically (a minimal sketch of the idea follows the list):
PM2
Nodemon
Forever
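Here's roughly what these tools do under the hood, sketched in Python: poll the watched file's modification time and restart the child process when it changes (the server.js entry point and node command are placeholders):

import os
import subprocess
import time

WATCHED = "server.js"          # placeholder entry point
CMD = ["node", WATCHED]        # command the watcher keeps alive

proc = subprocess.Popen(CMD)
last = os.stat(WATCHED).st_mtime
try:
    while True:
        time.sleep(1)                      # poll interval
        mtime = os.stat(WATCHED).st_mtime
        if mtime != last:                  # file changed: restart the server
            last = mtime
            proc.terminate()
            proc.wait()
            proc = subprocess.Popen(CMD)
except KeyboardInterrupt:
    proc.terminate()

Polling like this is also why the CHOKIDAR_USEPOLLING=true fix above works under WSL 2: it doesn't rely on filesystem events crossing the VM boundary.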
I just added a .env file containing FAST_REFRESH=false.
For me, working on Windows, WSL 2 was what caused this not to work. Running npm start in Command Prompt, not WSL, solved the issue for me.
Reminder: Arch Linux uses pacman, not apt-get
So I had an idea that I wanted to be able to leave my room and still see the progress of a download from my phone. I have looked for preexisting programs but have found none, so I decided to write a program myself.
The first step I took was reading the pacman documentation, to see if there was a function that could get the current download status. I know there is a file whose existence I can check,
/var/lib/pacman/db.lck
which would tell me if a download is in progress.
However, I wanted more specifics on the download: progress, time remaining, and the name of the package.
I've also found some GUI programs that use pacman, and I was thinking of reading their source code to see if I could reuse some of it, but I haven't found anything useful.
Is there a way to find out the specifics of a current download, other than looking at the terminal the command is running in?
Why overcomplicate things? Just install screen via pacman and start the pacman update inside a screen session. Then, from your smartphone, use an SSH client to connect to your local machine and attach to that screen.
You could set up an SSH server on your host machine, connect to it using a terminal emulator on your phone (Termux, for example), and run whatever commands you like from there. This way you'll be able to view all the terminal output from your phone quite seamlessly.
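If you want a one-shot status command to run over that SSH session, here's a rough sketch in Python. Note that pacman.log records transactions rather than byte-level download progress, so this is only a coarse view (the paths are the Arch defaults):

import os

LOCK = "/var/lib/pacman/db.lck"   # exists while a transaction is running
LOG = "/var/log/pacman.log"       # pacman's transaction log

print("transaction in progress" if os.path.exists(LOCK) else "no transaction running")

with open(LOG) as f:
    for line in f.readlines()[-5:]:   # show the most recent entries
        print(line.rstrip())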
I have recently spun up a new Ubuntu 12.04 instance in AWS. I had no issues connecting to and opening an SSH terminal to the server. Having connected to the instance, I was able to install the Ubuntu desktop and FreeNX without any problem, as well as enable password authentication on the server instance.
I downloaded and installed the NX Client for Windows on a PC running Windows 8. After entering the user credentials I can connect to and authenticate into the server. I'm brand new to the Linux world, but at this point everything was going so smoothly I was about ready to throw my Windows licenses to the dogs - good thing I held off on that.
"Problem: At the client, the !M logo window appears, but after a few seconds that window just closes, even without showing any error message."
That problem statement is in quotes because it's precisely the issue described in the FreeNX Ubuntu Community support documentation: https://help.ubuntu.com/community/FreeNX#Troubleshooting.
So naturally I follow the solution in the guide:
"Solution: The issue is due custom VNC configuration. In the server, access your home directory and run these commands,"
sudo rm .Xauthority*    # remove any stale X authority files
touch .Xauthority       # recreate an empty one
chmod 600 .Xauthority   # owner-only permissions
Unfortunately, this did absolutely nothing to resolve the issue. The problem would be easier to diagnose if I had an error message, but per the problem statement above, there is no error message to be had. Several hours of Googling yielded nothing, so I'm wondering if anyone here has encountered this problem in the past and, if so, would be willing to help.
Thanks!
I have a suite of tests for our website using Sahi. These tests are automated and feed into our Jenkins build system.
The tests run on a dedicated PC that is used for nothing else. It has Sahi plus all the browsers installed. The Jenkins server makes a remote call to the testing PC to run the tests. Due to the time it takes to run all the tests, this functional test suite is run overnight.
For several months this system was all working beautifully. But suddenly, one day a few weeks ago, I came into the office and found that all the tests had failed. They haven't worked since. As far as I know, nothing significant has changed (we obviously keep the browser versions up to date, but I don't think the failure coincided with any updates; Sahi itself hasn't had an update since last year).
I've done some work to find out what's happening:
Sahi uses a proxy as part of its browser-control magic, and I believe that this proxy is the source of the problem. But I can't work out how or why.
When the browser under Sahi's control loads the page to be tested, it seems none of the HTTP requests succeed. The raw page content is shown (I think because it's cached), but none of the styles, graphics or scripts load (except those already cached by the browser). Furthermore, the Sahi script then tries to click on a button to proceed through the test, but the browser fails to load anything. Sahi waits for a bit, but eventually the script times out and the test fails.
I can replicate this on the affected PC when running Sahi manually. It happens on any site, and in all browsers. However it doesn't happen on my own desktop PC, which has the same versions of all the relevant software installed. And of course, it worked fine in the past on the test box.
I have tried uninstalling Sahi and the browsers, and re-installing from scratch. This has not made any difference. (I appreciate that uninstalling often doesn't actually delete everything, so perhaps there's more I could do here?)
I'm really hoping someone can help me here, because I'm unsure what else to try.
Many thanks in advance.
Since it happens in all browsers, it looks like a firewall setting may be preventing access to port 9999. Turn off the firewall and check. If you see any exceptions on the Sahi console, you could post those too.
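Two quick checks you can run on the affected PC without a browser; a rough Python sketch, assuming the default proxy port of 9999 mentioned above:

import socket
import urllib.request

# 1. Can we open a TCP connection to the proxy port at all?
try:
    socket.create_connection(("localhost", 9999), timeout=3).close()
    print("port 9999 is reachable")
except OSError as exc:
    print("cannot reach port 9999:", exc)

# 2. Can we fetch a page through the proxy, as the browser would?
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": "http://localhost:9999"}))
try:
    resp = opener.open("http://example.com/", timeout=10)
    print("fetch through proxy:", resp.status)
except Exception as exc:
    print("fetch through proxy failed:", exc)

If the raw connection succeeds but the fetch fails, something is interfering with the proxy's outbound traffic rather than with the port itself.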
I wasn't able to get to the bottom of this. I suspect it was something blocking the Sahi proxy from working, but I couldn't find the culprit.
I'd wasted too much time on it, and the machine it was running on wasn't being used for anything else anyway, so my final solution to this was to re-install the OS from scratch.
This has solved the problem. It hasn't helped me understand why it happened, but as long as it's working, eh?
Thank you to everyone who stopped by.