Hgweb "Push" in IIS returning 502 (bad gateway) - iis

I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHg to push, but the result is the same when using the Mercurial command line.
I followed the tutorial here: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx to set up hgweb.

Looks like an old question, but someone is bound to come across it again. I was close to drawing a black circle on a wall and ... anyhow, the issue for us was the way the central repository was created. We had cloned it from Bitbucket while connected to the machine over Remote Desktop as the local administrator.
The issue was with the permissions on the [Repository]\.hg folder; you need to set correct permissions on it. For a test, try adding Everyone -> Full Control. Just make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour: the push itself even went through fine, except for the Bad Gateway at the end every time. Once the correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users, and then grant those local users access to the .hg folder, as sketched below.
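As an illustration, granting access from an elevated command prompt could look something like this (the repository path and the IIS_IUSRS worker-process group are assumptions about a typical IIS setup, not details from the original post):
:: Grant the IIS worker-process group Modify rights on the .hg folder,
:: inherited by all existing and future files and subfolders
icacls "C:\inetpub\hg\MyRepo\.hg" /grant "IIS_IUSRS:(OI)(CI)M" /T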
Hope it helps someone.

Try using the ISAPI module method instead of the CGI method that executes python.exe, as documented here. There's also a related, possibly duplicate, question here.

Take a look at the push_ssl setting in your hgweb.config file.
I was getting the same error (I had mine set to '*') and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the allow_push setting in order to get past further errors (or take another look at your authorization).
NOTE: At least in my case, having push_ssl = false wasn't enough, as that resulted in further errors (authorization failed).
(Again, this is simply a temporary solution until the server can be properly secured.)
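For orientation, both settings live in the [web] section of hgweb.config. A minimal sketch, with an illustrative repository path, and with the usual caveat that allowing plain-HTTP pushes from everyone is insecure:
[paths]
/ = C:\inetpub\hg\repos\*
[web]
# temporary: allow pushing without SSL while HTTPS is not yet configured
push_ssl = false
# temporary: allow anyone to push; restrict to specific usernames in production
allow_push = *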

This can happen for different reasons. To get more details about the error, run:
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may be caused by a proxy server, or the Mercurial web server may simply have "forgotten" about the repositories it needs to serve. If you are using TortoiseHg Workbench, go to Repository -> Start Web Server in the Workbench UI and make sure that your repository is in the list of served repos.
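As an extra sanity check (my suggestion, not part of the original answer), you can bypass IIS entirely and push against Mercurial's built-in server; if that works, the 502 is specific to the IIS/CGI layer. The repository path is a placeholder:
hg serve -R C:\Repos\MyRepo -p 8000 --config web.push_ssl=false --config web.allow_push=*
hg push http://localhost:8000/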

Try using https instead of http in .hg/hgrc; that resolved this problem for me with code.google.com.

I had this issue, and the problem ended up being the server running out of disk space.

Related

Timeout: WebException when trying to access/debug LocalHost of Azure Functions [Unity] [PlayFab]

I'm trying to debug Azure Function scripts locally, in conjunction with Unity, but I'm getting timeout errors every time.
I have a few things going on here, and I'm not sure which one is actually causing the problem... It might be a setting on Windows, as opposed to one of the programs.
I'm building in Unity 2019.4 and using PlayFab and its ability to call Azure Functions. When I execute scripts on the Azure servers, it functions correctly. But when I try to run with Local Debugging, I get WebException: The request timed out at System.Net.HttpWebRequest.GetRequestStream (see the full error below).
Here's what I'm doing to set up:
Set PlayFab to Local Debugging (via the VS Code extension) and confirm the JSON file is created in the temp folder
Install Azure Functions Core Tools from here
Start Azure Functions debugging from VS Code (terminal output shows that localhost is running it correctly)
The timeout error references the correct address, http://localhost:7071/api/CloudScript/ExecuteFunction, as confirmed in the VS Code terminal when the AzFunc debugging is started.
When I clone the project to my MacBook Pro, everything runs smoothly in local debugging.
So, because of this, I've tried checking to make sure ports aren't blocked via PowerShell (netsh firewall show state), and told Windows Defender not to block anything from Unity or Code. When I run netstat -ab in PowerShell/CMD, I do get:
Can not obtain ownership information
TCP 0.0.0.0:7071 DESKTOP-COMPUTER:0 LISTENING
[func.exe]
TCP 0.0.0.0:7680 DESKTOP-COMPUTER:0 LISTENING
I don't know if this is a problem, or normal...
I don't even know what else to check for. This problem is beyond me. If anyone knows the solution, or can point me in the right direction, I'd be very grateful!
Below are the two errors from the Unity log whenever I execute an Azure Function script through PlayFab while local debugging:
WebException: The request timed out
System.Net.HttpWebRequest.GetRequestStream () (at <14e3453b740b4bd690e8d4e5a013a715>:0)
PlayFab.Internal.PlayFabWebRequest.Post (PlayFab.Internal.CallRequestContainer reqContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:319)
Rethrow as WebException: Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:LogException(Exception)
PlayFab.Internal.PlayFabWebRequest:Post(CallRequestContainer) (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:332)
PlayFab.Internal.PlayFabWebRequest:WorkerThreadMainLoop() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:252)
System.Threading.ThreadHelper:ThreadStart()
Timeout: WebException making http request to: http://localhost:7071/api/CloudScript/ExecuteFunction
UnityEngine.Debug:Log(Object)
DemoScript:onPlayFabError(PlayFabError) (at Assets/PlayFabPartySDK/Examples/DemoScript.cs:264)
PlayFab.Internal.<>c__DisplayClass30_0:<QueueRequestError>b__0() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:395)
PlayFab.Internal.PlayFabWebRequest:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabWebRequest.cs:480)
PlayFab.Internal.PlayFabHttp:Update() (at Assets/PlayFabSDK/Shared/Internal/PlayFabHttp/PlayFabHTTP.cs:364)
Okay, TL;DR: the answer to the problem is that not everything was updated. So, update everything if you're experiencing the same problem.
More specifically, in my case it was the "Visual Studio Code Editor" package in Unity's Package Manager.
I just wanted to throw this out there in case anyone has a problem like this in the future. It may not be the same specific thing that needs upgrading, but search around for everything involved and make sure it's updated, not just the big, obvious things (like Unity or your IDE). Thankfully for me, in this case the out-of-date package was starting to cause other problems, and after much headbanging trying to solve those, I came across it.
Good luck, future fellow idiots!
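One extra check that can help with this class of problem (a suggestion added here as a sketch, not something from the thread above): hit the local endpoint directly, outside Unity. An empty POST will be rejected by the Functions host, but a fast error response at least proves the listener is reachable, while a hang points back at the network layer.
curl -v -X POST http://localhost:7071/api/CloudScript/ExecuteFunction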

Trivial Node.js via Passenger on DreamHost - Permission Denied

I tried setting up a do-nothing Node app, and it failed.
I developed some Node.js code offline in containers. I now want to try deploying it on DreamHost. I am doing it incrementally, adding features one by one, starting with "Hello World" and going from there.
I set up a new subdomain and enabled Passenger. I was able to serve up an index.html file. I followed https://help.dreamhost.com/hc/en-us/articles/360029083351-Installing-a-custom-version-of-NVM-and-Node-js and installed Node and nvm (using the versions recommended in that article). I then installed a few packages I plan to use (most notably Express; the rest won't come into play until later).
With just a Hello World app, it failed. The error message is below. I checked all the relevant files, and they all have global read and execute permissions, so I'm wondering if it is something else. I tried multiple Hello World examples for app.js, copied directly from different tutorials, none of which worked (though they do work locally). My more complex code also does not work, but that is the next step.
What am I missing? I followed the directions exactly. What other landmines do I have to look forward to? I really don't want to spend time wrestling with infrastructure; ideally, I want it to "just work".
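For context, the kind of minimal app.js being tested is along these lines (a typical Express hello-world; the actual code isn't included in the post):
// app.js - minimal Express app (illustrative; under Passenger the port argument is effectively ignored)
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('Hello World'));
app.listen(process.env.PORT || 3000);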
An error occurred while starting the web application. It exited before signalling successful startup back to Phusion Passenger. Please read this article for more information about this problem.
Raw process output:
*** ERROR ***: Cannot execute /home/<user name>/.nvm/versions/node/v12.16.3: Permission denied (13)
Unclear what solved the issue.
Ran through changing the permissions on the files, as would seem obvious, and changed '/home/<user name>/.nvm/versions/node/v12.16.3' to '/home/<user name>/.nvm/versions/node/v12.16.3/bin/node' in the .htaccess file. Neither of those seemed to solve it on its own.
Repeated the process again later, followed it with 'touch <webapp directory>/tmp/restart.txt', and it started working. I had been editing files in the web app's directory, so it isn't clear what touching that file did.
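For reference, Passenger restarts the application whenever the timestamp of tmp/restart.txt changes, which would explain why the touch picked up changes made while the app was already loaded. A minimal DreamHost-style .htaccess for this setup might look like the following sketch (the paths are placeholders; as noted above, PassengerNodejs must point at the node binary itself, not its parent directory):
# .htaccess in the web app's directory (illustrative)
PassengerEnabled on
PassengerNodejs /home/username/.nvm/versions/node/v12.16.3/bin/node
Then, after editing server-side files:
touch /home/username/example.com/tmp/restart.txt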

Version control on shared web host with Bazaar

I have a project I am going to begin co-developing on one of my web servers. Due to the nature of this kind of thing, I'd like to have some version control going on. I've been searching all day for something that fits my needs, and Bazaar seems the way to go, but I cannot figure out how to configure it.
My web host is Linux, without SSH (or SFTP, as far as I can tell). I've read that you can use Bazaar in this situation with a "dumb" server, but I can't for the life of me figure out how to configure it, or find a guide. Everything out there either requires SSH/CLI access (neither of which I have) or is too vague to follow. I am using the Windows GUI for Bazaar as well.
Can anyone either point me to a guide/instructions on how to do it, or post one here?
Edit Since Original Post
I have been trying several things since my original post. It might be that I am misunderstanding how Bazaar is meant to work. What I want is to have my PHP files etc. on my web host (to which I do not have SSH access) so that my co-developers and I can edit and test files without overwriting each other.
I initially tried to "start a new project" on my server via "ftp://user:pass@server", and it says that is successful. Then it prompts with an "Unable to open location" error saying "C:/ftp:/user:pass@server is not a branch, checkout, or repository.
Do you want to open it as a virtual repository, searching for nested locations?"
When I hit yes, it gives me an error: "Unable to change to C:/ftp:/user:pass@server - closing page."
If I do the same thing with the "Open an existing location" option, it gives me the same error, except afterwards the Bazaar GUI hangs with "Not Responding" and needs to be killed.
Either way, nothing is created that I can then interact with in Bazaar. If I create a local project and then push, it all seems to work. However, if I try to commit changes so I can push them, I get an error: "Bazaar has encountered an environmental error. Please report a bug if this is not the result of a local problem at https://bugs.launchpad.net/qbzr/+filebug including this traceback, and a description of what you were doing when the error occurred." The details show "bzr: ERROR: Unable to determine your name.
Please, set your name with the 'whoami' command.
E.g. bzr whoami "Your Name <name@example.com>""
Before you can commit revisions, you need to set a name and email address; these are important metadata in a commit. You can set these in the Settings | Configuration | User Configuration menu: on the General tab, fill in the Name and E-mail fields. It's recommended to use real data in public projects, so that others who view your project can contact you in case they have questions, but it doesn't have to be real. This is a one-time initial setup.
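If you prefer the command line over the GUI dialog, the same one-time setup looks like this (substitute your own details for the example identity):
bzr whoami "Your Name <your.name@example.com>"
bzr whoami
The second command, with no arguments, prints the identity currently in effect, so you can verify the setting took.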
As a next step, I would do a test to make sure you can really use your server over FTP, as a sanity check:
Commit a few revisions in your local repository, just so that you have something to push. It could be anything; it doesn't matter.
Try a push to a URL in the format ftp://user:pass@server/absolute/path/to/somewhere. In your post you wrote ftp://user:pass@server, but it's important to have an absolute path there, like in this example.
If for some reason the push doesn't work well using the GUI, try it on the command line, for example:
bzr push ftp://user:pass@server/absolute/path/to/somewhere
This should really give an error message we can debug. In that case, paste the output into your question.
UPDATE
You said in the comments that something was wrong with your name+email setting, and changing that resolved the problem. It would be nice to know what exactly the problem was there.
About bzr push to an FTP server: I double-checked, and this will never create the working files on the server. From bzr push -h:
The target branch will not have its working tree populated because this
is both expensive, and is not supported on remote file systems.
Some smart servers or protocols may put the working tree in place in the future.
Over FTP it's a "dumb" server, so it definitely won't put the files there, only the .bzr directory, which is the repository and branch data. If you want to have the files there, I'm afraid you have to copy manually. There is a related bzr-push-and-update plugin, but it requires ssh access, which is not your case.

How to do remote staging in Liferay 6.1.1 GA2?

I have a site where local staging worked fine, but when I tried to connect it to a remote server, it doesn't work and gives an error that the connection can't be established. Has anyone tried this?
This is the configuration with the error message:
This blog post (disclaimer: my own) explains how to do it with https - you can omit long parts of it if you don't want encryption. It also covers 6.0, but the general principle is still the same.
You want to pay special attention to the paragraph Allow access to webservices in that article and check if your publishing server (the "stage") has access to the live server. In general, if this is not on localhost, it requires configuration as mentioned in that article.
As you indicate that you can't connect to your production server from your staging server, please check by opening a browser running on the staging server and connecting to the production server: go to http://production-server-name:8080/api/axis and validate that you can connect. (Note: you only get an authoritative result for this test when you're not accessing localhost as the production system, so do run the browser on the staging system!) With this test you can rule out the first possibility, that your remote system is being disallowed. Once this succeeds, you'll need credentials for the production server to be entered on the staging server; the account you use needs permission to change all the data it needs to change when publishing content (and pages etc.).
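The equivalent check from a shell on the staging box, if a browser isn't handy there (same caveat applies: run it on the staging machine, not on production):
curl http://production-server-name:8080/api/axis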
The error message you give in the added screenshot can appear when the current user on staging does not have access to the production system (with the credentials used). Verify that the user account you are using on your staging system (the one that gets the error message from the screenshot) also exists in your production system, and synchronize the passwords of the two.
In your comment you give the information that you're using different versions for the staging and the production environments. I don't expect that to work, so this might be the root cause; test with both systems at the same version.
A couple of important points to keep in mind with remote publishing:
If you're not on LDAP (or you have different LDAPs for different environments), you should validate that your user account is exactly the same in both the source and target environments. So, if you're on the QA site and you want to remote publish to production, your screen name, email address, and password should all be the same.
Email address is uber important. Depending on which distribution (version) of Liferay you are on, the remote publish code uses your email address irrespective of whether or not you have portal-ext.properties configured to use screen names.
You should have the Administrator role on both sides. It may not be required in every scenario, but giving that role to users who do remote publishing has saved me time and effort debugging why someone's remote publish didn't work. Debugging this process takes a very long time.
If remote publishing is causing you problems (and it probably is, or you wouldn't be here), try doing LAR file exports/imports. This is important since remote publish failures are not exactly helpful in telling you what failed; they just tell you that it failed. Surprisingly, there are often problems in the export process, and you can sometimes pinpoint some bad documents or a funky development thing you did with Global scope and portlet preferences that caused your RP to fail. I generally use this order in this situation: a) documents and media [exclude thumbnails or your LAR file will likely double in size; also exclude ranks if you're not using them] from the wrench icon in the control panel, b) web content from the wrench icon in the control panel, c) public pages [include data > web content display, but remove all the other data checkboxes; include permissions; include categories], d) private pages [same options as public pages].
If you already have the Administrator role and it's saying you don't have permission to RP to the remote site, set up your user on the target environment with the "Site Administrator" or "Site Owner" role.
A little late for "first and foremost", but anytime you have something that's not working (remote publishing or otherwise), check the logs before you do anything else. The Liferay code base doesn't include a lot of helpful logging, but you do occasionally get a nugget of information that helps you piece together enough to do root cause analysis.
Cheers! HTH

CruiseControl.NET force build not working from CCTray

I really hope someone who is a CC.NET expert can help with this, because this problem is painful!
I have a remote build machine with CruiseControl.NET and CCTray running (version 1.5.7256.1).
On the local machine I have CCTray connecting through HTTP, not .NET remoting.
When I configure the projects, I add a server through HTTP and use the following URL:
http://localhost/ccnet
If I leave [Set to pre-1.5.0 server] UNCHECKED, then it fails to connect with this error:
Failed to connect to server: The remote server returned an error: (500) Internal Server Error.
If I leave [Set to pre-1.5.0 server] CHECKED, then it succeeds, and I can kick builds off from CCTray on the local machine fine.
Now, if I go to another machine from which I want to connect remotely, I follow the same steps. Again, only the pre-1.5.0 setting works. Why?! CruiseControl.NET and CCTray are both at 1.5.7256.1!
The second problem, and the main one, which I think may be related to the previous: if I then use the pre-1.5.0 setting, the projects show up, but when I force a build I now get this error:
An unexpected error has occurred while trying to force a build.
The method or operation is not implemented.
What am I doing wrong? I'm really struggling with this. I was previously using 1.4 versions and everything worked fine, so has something broken? I'm using IIS 7 too, so I don't know if this could have something to do with it.
I had a look at the tray app's source code for the 1.5 release (as well as the current trunk). When connected to a pre-1.5 server, regardless of whether you specified the connection as remoting or HTTP, you will receive the not-implemented exception message when attempting to force-build a project.
It looks like your options at the moment are to wait for a new release, or to pull down the code and modify it yourself (and I have no idea how well backwards compatibility was maintained between versions...).
It sounds like you may have configuration options that are part of a breaking change, perhaps? Can you post more of your configuration so we can check it?
Also, after you save changes, have you looked at the server log? It often has information about what broke, especially the part that happens right after you change a config file and save.
I'd be interested in seeing log file information. Also, why are you using HTTP rather than remoting? Perhaps show us some of your settings in ccnet.exe.config? Here's my remoting setup, which I believe is the default:
<system.runtime.remoting>
  <application>
    <channels>
      <channel ref="tcp" port="21234">
        <serverProviders>
          <formatter ref="binary" typeFilterLevel="Full"/>
        </serverProviders>
      </channel>
    </channels>
  </application>
</system.runtime.remoting>
Also, you may want to check security issues and firewall settings on that server (Windows event log for security audit failures, etc.).
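For instance, if the default remoting port from the config above turns out to be blocked, a rule along these lines (run on the build server; the rule name is arbitrary) would open it:
netsh advfirewall firewall add rule name="CCNet remoting" dir=in action=allow protocol=TCP localport=21234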
