GitHub sharing process failed - android-studio

Before answering my question, please take into consideration that I'm new to GitHub and how it works. I'm having trouble sharing my project to GitHub. I'm receiving the error:
The remote end hung up unexpectedly RPC failed; result=18, HTTP code = 200
The sharing process takes very long and then stops with this error. The project gets created on GitHub with no code. Other forum threads about this error talk about Git Bash; note that this is done through Android Studio.

As mentioned here
Specifically, note the 'result=18' portion. This is the error code coming from libcurl, the underlying library Git uses for HTTP communication. From the libcurl documentation, a result code of 18 means:
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.
You could try increasing the HTTP post buffer (the value is in bytes, so this is roughly 500 MB):
git config --global http.postBuffer 524288000
But first, make sure your local repo doesn't include large binaries.
To be sure, check your .gitignore.
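If you want to check for large binaries, here is a quick sketch using standard git plumbing commands (the ~5 MB threshold is only an example, adjust it to taste):

# list blobs in the repository history larger than about 5 MB (size in bytes, then path)
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | awk '$1 == "blob" && $3 > 5242880 {print $3, $4}' \
  | sort -rn

Typical Android build output (for example build/, .gradle/, *.apk, local.properties) should be listed in .gitignore before the initial share so it never enters the repository.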

Related

Azure Data Factory - Copy Activity - Read data from http response timeout

I use the Copy activity to query an HTTP endpoint, but after 5 minutes I keep getting the following "Read data from http response timeout" error:
Error Code: ErrorCode=UserErrorReadHttpDataTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Read data from http response timeout. If this is not binary copy, you are suggested to enable staged copy to accelerate reading data, otherwise please retry.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The operation has timed out.,Source=System,'"
The request runs to completion without interruption on the server side (visible in the logs).
I searched online and the only thing I found was this:
The error gets triggered as soon as reading from the source hits 5 minutes.
PS: The error seems to happen only on certain endpoints (with a different endpoint on the same server, I don't get any timeout error).
Have any of you ever had a problem like this? If so, how did you solve it?
Thank you for your help!
Error Message - Read data from http response timeout. If this is not binary copy, you are suggested to enable staged copy to accelerate reading data, otherwise please retry.
As the above error message suggests, you need to try staged copy.
You also need to configure the Retry settings on the Copy activity.
Refer - https://learn.microsoft.com/en-us/answers/questions/51055/azure-data-factory-copy-activity-retry.html
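In pipeline JSON terms, both of those settings live on the Copy activity itself. A rough sketch, trimmed to the relevant properties (the staging linked service name, path, and values here are placeholders, not taken from the question):

{
    "name": "CopyFromHttp",
    "type": "Copy",
    "policy": {
        "retry": 3,
        "retryIntervalInSeconds": 60,
        "timeout": "0.02:00:00"
    },
    "typeProperties": {
        "enableStaging": true,
        "stagingSettings": {
            "linkedServiceName": {
                "referenceName": "StagingBlobStorage",
                "type": "LinkedServiceReference"
            },
            "path": "adf-staging"
        }
    }
}

Staged copy writes the data to the interim Blob storage first and then loads it into the sink, which is what the error message is recommending for slow non-binary sources.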

docker pull from Artifactory results in "net/http: request canceled" inconsistently

We are running Artifactory 5.11.0 (we just updated to 6.0.2 today and haven't yet seen this there) in a Docker container, and when our automation executes a docker pull from Artifactory, 9 times out of 10 it is successful. Sometimes, even when running the docker pull from the machine hosting Artifactory, the pull fails with:
Pulling 'docker.{artifactory url}/staging:latest'...
Error response from daemon: Get http://docker.{artifactory url}/v2/staging/manifests/latest: Get http://docker.{artifactory url}:80/artifactory/api/docker/docker/v2/token?account=admin&scope=repository%3Astaging%3Apull&service=docker.{artifactory url}%3A80:
net/http: request canceled (Client.Timeout exceeded while awaiting
headers)
Like I said, most of the time this works perfectly, but that 1 time in 10 (probably less) we get the above error during our automated builds. I tried running the docker pull in a while loop overnight waiting for a failure, and there was none. I also ran ping overnight and no packets were lost.
OS: Debian 9 x64
Docker version 17.09.0-ce, build afdb6d4. The issue seems to happen more frequently with Docker version 18.03.1~ce-0~debian, but I have no direct evidence to suggest the client is at fault.
Here is what JFrog provided me to try to resolve this issue. (Note: we were on an older version of Artifactory at the time, and they did recommend updating to the latest release, as it contained several fixes that could help.)
The RAM value -Xmx2g is the default value provided by Artifactory. You can increase it by going into the Docker container ("docker exec -it artifactory bash"), editing $ARTIFACTORY_HOME/bin/artifactory.default (usually /opt/jfrog/artifactory/bin/artifactory.default), and changing the RAM value accordingly. Please follow this link for more information.
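For reference, the heap size in artifactory.default is set through a JAVA_OPTIONS export; the exact default line varies by version, but the change is a sketch along these lines (the 4g value is only an example, size it for your host):

# inside the container: $ARTIFACTORY_HOME/bin/artifactory.default
export JAVA_OPTIONS="-server -Xms512m -Xmx4g -Xss256k -XX:+UseG1GC"

Restart the Artifactory container afterwards so the new heap setting takes effect.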
We should also increase the access max threads count; to do that, edit $ARTIFACTORY_HOME/tomcat/conf/server.xml and change the connector to:
<Connector port="8040" sendReasonPhrase="true" maxThreads="200"/>
Also add below line in /var/opt/jfrog/artifactory/etc/artifactory.system.properties
artifactory.access.client.max.connections=200
To deal with heavy loads we also need to append the line below to /var/opt/jfrog/artifactory/etc/db.properties. Please follow this link for more information.
pool.max.active=200
Also, they told me to make sure we were using the API key when authenticating the Docker client with Artifactory instead of a username/password login, since the latter will go through our LDAP authentication and the former will not:
One thing to try would be to use an API Key instead of the plain text password, as using an API key will not reach out to the LDAP server.
We were already doing this, so this had no impact on the issue.
Also posted here: https://www.jfrog.com/jira/browse/RTFACT-17919?focusedCommentId=66063&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-66063
I hope this helps as it helped us.

Uninitialized Constant on custom Puppet Function

I've got a function that I'm trying to run on my puppetmaster on each client run. It runs just fine on the puppetmaster itself, but it causes the agent runs to fail on the nodes because of the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: uninitialized constant Puppet::Parser::Functions::<my_module>
I'm not really sure why. I enabled debug logging on the master via config.ru, but I see the same error in the logs with no more useful messages.
What other steps can I take to debug this?
Update:
Adding some extra details.
Puppet open source (community edition), with a Foreman-connected puppetmaster running on Apache2 with Passenger/Rack
Both client and master are running Puppet 3.7.5
Both client and master are Ubuntu 14.04
Both client and master are using Ruby 1.9.3p484 (2013-11-22 revision 43786) [x86_64-linux]
Pluginsync is enabled
The custom function works fine on the puppetmaster when run as part of the puppetmaster's own manifest (it's a client of itself) or when using puppet apply directly on the server.
The function is present on the clients, and when I update it for debugging purposes I do see the file change on the client side.
Unfortunately I cannot paste the function here because it is proprietary code. It does rely on the aws-sdk, but I have verified that the Ruby gem is present on both the client and the master, at the same version in both places. I've even tried surrounding the entire function with:
begin
  # entire function body here
rescue LoadError
end
and have the same result.
This is embarrassingly stupid, but it turns out I had somehow never noticed that I hadn't actually included this line in my function:
require 'aws-sdk'
And so the error I was receiving:
uninitialized constant Puppet::Parser::Functions::Aws
Was actually referring to the Aws constant from the AWS SDK not being loaded (because of the missing require), and not to a problem with the Puppet module itself (which, confusingly, was also named aws), which is how I had been interpreting it. Basically, I banged my head against the wall for several days over a painfully silly mistake. Apologies to all who tried to help me :)
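Since the real function is proprietary, here is only a minimal, hypothetical sketch of a Puppet 3.x parser function showing where that require belongs (the function name, region, and EC2 lookup are invented for illustration):

# modules/<my_module>/lib/puppet/parser/functions/aws_instance_name.rb (hypothetical)
require 'aws-sdk'  # without this, the first reference to Aws below raises
                   # "uninitialized constant Puppet::Parser::Functions::Aws"

module Puppet::Parser::Functions
  newfunction(:aws_instance_name, :type => :rvalue,
              :doc => "Return the Name tag of an EC2 instance (illustrative only)") do |args|
    instance_id = args[0]
    ec2 = Aws::EC2::Client.new(:region => 'us-east-1')
    reservations = ec2.describe_instances(:instance_ids => [instance_id]).reservations
    tags = reservations[0].instances[0].tags
    name = tags.find { |t| t.key == 'Name' }
    name ? name.value : nil
  end
end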

Ghost (NodeJS blog) on Azure: Periodic 500 error troubleshooting

Background / Issue
Having a strange issue running a Ghost blog on Azure. The site seems to run fine for a while, but every once in a while, I'll receive a 500 error with no further information. The next request always appears to succeed (in tests so far).
The error seems to happen after a period of inactivity. Since I'm currently just getting set up, I'm using an Azure "Free" instance, so I'm wondering if some sort of behind-the-scenes resource conservation is causing it (which would be alleviated when I upgrade).
Any idea what could be causing this issue? I'm sort of at a loss for where to start, since the logs don't really help me in this case. I'm new to NodeJS (and NodeJS on Azure), and since this is my first foray, any tips/tricks on where to look would be helpful as well.
Some specific questions:
When receiving an error like this, is there anywhere I can go to see any output, or is it pretty much guaranteed that Node actually didn't output something?
On Azure free instances, does some sort of resource conservation take place which might cause the app to be shut down (and thus for me to see these errors only after a period of inactivity)?
The Full Error
The full text of the error is below (I've turned debugging on for this reason):
iisnode encountered an error when processing the request.
HRESULT: 0x2
HTTP status: 500
HTTP reason: Internal Server Error
You are receiving this HTTP 200 response because system.webServer/iisnode/#devErrorsEnabled configuration setting is 'true'.
In addition to the log of stdout and stderr of the node.exe process, consider using debugging and ETW traces to further diagnose the problem.
The node.exe process has not written any information to stderr or iisnode was unable to capture this information. Frequent reason is that the iisnode module is unable to create a log file to capture stdout and stderr output from node.exe. Please check that the identity of the IIS application pool running the node.js application has read and write access permissions to the directory on the server where the node.js application is located. Alternatively you can disable logging by setting system.webServer/iisnode/#loggingEnabled element of web.config to 'false'.
I think it might be something in the Azure web.config rather than in Ghost itself, so look for logs with that in mind, because Ghost is not the component throwing that error. I found this question that might help you out:
How to debug Azure 500 internal server error
Good luck!
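If you do want the iisnode log files and dev errors, the settings mentioned in the error text live in web.config. A sketch of the relevant fragment, assuming Ghost's entry point is index.js (adjust to your deployment):

<configuration>
  <system.webServer>
    <handlers>
      <!-- route requests for the Ghost entry point through iisnode -->
      <add name="iisnode" path="index.js" verb="*" modules="iisnode" />
    </handlers>
    <!-- write node.exe stdout/stderr to a directory the app pool identity can write to,
         and surface errors in the browser while debugging -->
    <iisnode loggingEnabled="true"
             logDirectory="iisnode"
             devErrorsEnabled="true" />
  </system.webServer>
</configuration>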

How to connect Pinoccio to Apache CouchDB

Is anyone using the nice Pinoccio from www.pinocc.io?
I want to use it to post data into an Apache CouchDB using node.js. So I'm trying to poll data from the Pinoccio API, but I'm a little lost as to whether I should:
schedule the polls
do long polls
or take a completely different approach
Any ideas are welcome.
Pitt
Sure. I wrote the Pinoccio API; here's how you do it:
https://gist.github.com/soldair/c11d6ae6f4bead140838
This example depends on the pinoccio npm module ~0.1.3 so make sure to npm install again to pick up the newest version.
You don't need to poll, because Pinoccio will send you changes as they happen if you have an open connection to either "stats" or "sync". If you want to poll you can, but it's not "real time".
Sync gives you the current state plus a stream of changes as they happen, so it's perfect if you only need to save changes to your troop while your script is running, or to show the current and last known state on a web page.
The solution that replicates every data point we store is stats, which is what the example uses. Stats lets you read everything that has happened to a scout; digital pins, for example, are the "digital" report. You can ask for data from a specific point in time or just from the current time (the default). Changes to this "digital" report will continue streaming live as they happen until the "end" time is reached, or if "tail" equals 0 in the options passed to stats.
Hope this helps. I tested the script on my local CouchDB and it worked well. You would need to modify it to copy more stats from each scout. I hope that soon you will be able to request multiple reports from multiple scouts in the same stream; I just have some bugs to sort out ;)
You need to look at two dimensions:
node.js talking to CouchDB. This is well understood and there are some existing questions you can find here; see the sketch below for the basic write path.
Getting the data from the Pinoccio. The API suggests that as long as the connection is open, you get data. So use a short timeout and a loop. You might want to run your own node.js instance for that.
Interesting fact: the CouchDB team seems to be working on replacing their internal JS engine with node.js.
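For the first dimension, here is a minimal sketch of writing one reading into CouchDB over its plain HTTP API, using only Node's built-in http module (the host, port, database name, and document shape are assumptions for illustration):

var http = require('http');

// Minimal sketch: store one reading in CouchDB over its HTTP API.
// Assumptions: CouchDB on localhost:5984 and a database named "pinoccio" that already exists.
function saveReading(reading, callback) {
  var body = JSON.stringify(reading);
  var req = http.request({
    host: 'localhost',
    port: 5984,
    path: '/pinoccio',   // POST to the database creates a document with a generated _id
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, function (res) {
    var data = '';
    res.on('data', function (chunk) { data += chunk; });
    res.on('end', function () { callback(null, JSON.parse(data)); });
  });
  req.on('error', callback);
  req.write(body);
  req.end();
}

// Usage: store a made-up "digital" report entry.
saveReading({ report: 'digital', scout: 1, value: 1, at: new Date().toISOString() },
  function (err, result) {
    if (err) { return console.error(err); }
    console.log('stored', result.id, result.rev);
  });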
