Inside a PHP application I'm trying to replicate 4 databases in and out, and this is only happening with one of those replications; the database's name is "people". To rule out any PHP-library-specific issue, I'm testing from bash with curl:
curl -H 'Content-Type: application/json' -X POST LOCAL_PATH/_replicate -d '{"source":"REMOTE_PATH/people","target":"LOCAL_PATH/people", "continuous":false}'
With this output:
{"error":"checkpoint_commit_failure","reason":"Error updating the source checkpoint document: conflict"}
I've checked this post, but it doesn't seem to be that, as we're using full paths for replication (both local and remote).
This happens most of the time, but not always. Any ideas?
CouchDB stores checkpoints in the source database server for the last sequence ID it was able to replicate. Therefore the credentials you're using to replicate from the source server also need write permission on the source database, so that these checkpoints can be written.
However, this is not strictly necessary, because checkpoints are an optimization. Your docs will replicate just fine without them.
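Since the checkpoints live on the source, one hedged workaround is to replicate with source credentials that do have write access, embedded in the source URL; a sketch with placeholder user, password, and host (not from the original setup):
# user:password must have write access on the source "people" database
curl -H 'Content-Type: application/json' -X POST LOCAL_PATH/_replicate -d '{"source":"https://user:password@REMOTE_HOST/people","target":"LOCAL_PATH/people","continuous":false}'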
I am following this guide on setting up dsbulk: https://docs.datastax.com/en/dsbulk/doc/dsbulk/dsbulkSimpleLoad.html
I'm getting confused at this part:
dsbulk load -url export.csv -k ks1 -t table1 \
-b "path/to/secure-connect-database_name.zip" \
-u database_user -p database_password -header true
Where is that secure-connect-database_name.zip, or how should I generate it?
I'm not at all keen on the method above, so if there were a way to just pass all the parameters in a command, that would work better for me.
Please note that the secure connect bundle (the -b option) is something specific to DataStax Astra. If you're loading to an Astra instance, you would find the secure connect bundle downloadable from the database dashboard in the Astra console.
If you are using DSBulk with Cassandra, DSE, or any other compatible API, you do not need to be concerned with the secure connect bundle. You should be able to pass every parameter you need on the command line, or in a config file.
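For instance, against a self-managed cluster you can point DSBulk at a contact point directly; a minimal sketch, assuming DSBulk's -h/-port shortcuts, with placeholder host, credentials, and table names:
dsbulk load -url export.csv -k ks1 -t table1 \
  -h '127.0.0.1' -port 9042 \
  -u database_user -p database_password -header true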
Thanks for your comment. Updated the docs, adding an Important note with info about the secure connect bundle ZIP and a related topic link. Might need to refresh your browser view to see the updates.
https://docs.datastax.com/en/dsbulk/doc/dsbulk/dsbulkSimpleLoad.html
https://docs.datastax.com/en/dsbulk/doc/dsbulk/dsbulkSimpleUnload.html
I want to get a list of issues that were created in a specific date range using the GitHub Enterprise API. What I want to do would be the equivalent of doing a search on the issues page, as shown in the image below:
I have tried the following command: curl -H "Authorization: token myToken" "https://github.mydomain.com/api/v3/repos/owner/repo/issues?state=all&since=2015-09-01" > issues.json, but that does not give me what I need, because according to the API docs the since parameter is described as:
Only issues updated at or after this time are returned. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ
Thanks in advance!
So after lots of googling and reading through the GitHub API docs, I figured it out. What I needed for this was the GitHub Search API. The first thing I did was figure out which endpoints were available to me on my enterprise API, as described in this Stack Overflow post. I used the following command to do that:
curl -H "Authorization: token [myToken]" "https://github.mydomain.com/api/v3/"
One of the endpoints returned in the response was:
"issue_search_url": "https://github.mydomain.com/api/v3/search/issues?q={query}{&page,per_page,sort,order}"
Using that endpoint, I constructed the following command that gave me what I needed:
curl -H "Authorization: token [myToken]" "https://github.mydomain.com/api/v3/search/issues?page=1&per_page=100&sort=created&order=asc&q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01"
Let's break down the parameters (anything after the ? sign):
page=1&per_page=100: The default number of results for this request is 30 per page. In my case I had 664 results, so I needed to make multiple requests, specifying which page (page=1) and how many results I wanted per request (per_page=100), until I got all of them. In my case I made 7 requests with the above URL, each time changing the page number (see the loop sketch after this list). For more info, see the GitHub docs on Pagination.
&sort=created&order=asc: Sort by the created date in ascending order (oldest first). See the GitHub Search API and Searching Issues docs.
q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01: Form a search query (q=) that limits the search to issues (is:issue) created on or after 2015-09-01 (created:>=2015-09-01) in the repo Owner/RepoName (repo:[Owner]/[RepoName]).
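To avoid editing the page number by hand, the seven requests can also be scripted; a rough bash sketch using the same placeholders:
for page in $(seq 1 7); do
  curl -H "Authorization: token [myToken]" \
    "https://github.mydomain.com/api/v3/search/issues?page=$page&per_page=100&sort=created&order=asc&q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01" \
    > "issues_page_$page.json"   # one file per page of results
done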
Hope this helps others, as I have found that the GitHub API docs are not very clear.
I'm trying to get all of the pull requests for a given repo. The GitHub API paginates results such that you cannot get all the results at once. In the documentation, they say that getting all of the results will require knowing how many pages there are. They say you can learn how many pages there are by getting the Link response header, which you should be able to get with curl -I https://api.github.com/repos/rails/rails, for instance. But, while that works for the rails repository, it does not work for the repo that I need: /lodash/lodash. When I run the same command with lodash, I get:
curl -I https://api.github.com/repos/lodash/lodash/pulls
HTTP/1.1 200 OK
...
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP, X-RateLimit-Limit, ...
...
In other words, Link is an Access-Control-Expose-Header for the lodash repository. I haven't been able to find any information on how to get it, given that.
So I believe the crux of my question is "How do I get an Access-Control-Expose-Header?" but I wanted to provide context in case there is another way of getting all pull requests.
As of today, there are no open pull requests on the lodash repository, so you get no results.
Per the GitHub API docs, the default state is open when you retrieve pull requests:
Either open, closed, or all to filter by state. Default: open
Applying a filter that returns more results (and therefore more pages) will give you the Link header:
curl -I "https://api.github.com/repos/lodash/lodash/pulls?state=all"
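Once there is more than one page of results, the header can be pulled out directly, for example:
curl -sI "https://api.github.com/repos/lodash/lodash/pulls?state=all" | grep -i '^link:'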
I am working on a project using CouchDB, and the partition dedicated to the CouchDB files has reached maximum capacity, to the point that the site fails to connect to CouchDB and produces connection errors. I know that CouchDB is storage hungry, but I never expected this so soon. I have tried compaction methods such as:
curl -H "Content-Type: application/json" -X POST http://localhost:5984/dbname/_compact
curl -H "Content-Type: application/json" -X POST http://localhost:5984/dbname/_view_cleanup
and,
localhost:5984/dbname/_compact/all_view_documents
The above commands released only 2GB of storage. As I searched for which files in the partition consume the most storage, I found that a particular folder, /usr/local/var/lib/couchdb/.dbname/mrview, contains a view file that is 144GB in size and is still there even after I compacted all view documents.
Note: the compacted document/database file is only 1.6GB; the total partition storage is 150GB.
I encountered this problem too.
Apparently these are files that grow when the design views are modified, because the indexes are recomputed.
My test was to delete the files and then functionally test my application: some of the files were re-created, but at a much smaller size.
You could assume that there is no problem in deleting them, and that this growth in size should not worry you unless you change the design views.
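For what it's worth, CouchDB triggers view compaction per design document, so before deleting files it may be worth compacting each design document explicitly; a sketch with a placeholder design document name:
curl -H "Content-Type: application/json" -X POST http://localhost:5984/dbname/_compact/designdoc
Running _view_cleanup afterwards (as in the question) then removes index files that no design document references anymore.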
http://www.staticshin.com/programming/does-updating-a-design-document-in-couchdb-cause-rebuilding-of-views/
I am trying to upload a file from Linux to SharePoint with my SharePoint login credentials.
I use the cURL utility to achieve this, and the upload is successful.
The command used is: curl --ntlm --user username:password --upload-file myfile.txt -k https://sharepointserver.com/sites/mysite/myfile.txt
The -k option is used to get past certificate errors for the non-secure SharePoint site.
However, this uploaded file shows up in the "checked out" view (green arrow) in SharePoint under my login.
As a result, the file is non-existent for users from other logins.
My login has write privileges on the SharePoint site.
Any ideas on how to "check in" this file to SharePoint with cURL so that the file can be viewed from anyone's login?
I don't have curl available to test right now but you might be able to fashion something out of the following information.
Check in and check out are handled by /_layouts/CheckIn.aspx
The page has the following querystring variables:
List - A GUID that identifies the current list.
FileName - The name of the file with extension.
Source - The full url to the allitems.aspx page in the library.
I was able to get the CheckIn.aspx page to load correctly just using the FileName and Source parameters and omitting the List parameter. This is good because you don't have to figure out a way to look up the List GUID.
The CheckIn.aspx page postbacks to itself with the following form parameters that control checkin:
PostBack - boolean set to true.
CheckInAction - string set to ActionCheckin
KeepCheckout - set to 1 to keep the file checked out, or 0 to check it in
CheckinDescription - string of text
Call this in curl like so (the URL is quoted so the shell doesn't interpret the & characters):
curl --data "PostBack=true&CheckinAction=ActionCheckin&KeepCheckout=0&CheckinDescription=SomeTextForCheckIn" "http://{Your Server And Site}/_layouts/checkin.aspx?Source={Full Url To Library}/Forms/AllItems.aspx&FileName={Doc And Ext}"
As I said, I don't have curl to test with, but I got this to work using the Composer tab in Fiddler 2.
I'm trying this with curl now and there is an issue getting it to work. Fiddler was executing the request as a POST. If you try to do this as a GET request you will get a 500 error saying that the AllowUnsafeUpdates property of the SPWeb will not allow this request over GET. Sending the request as a POST should correct this.
Edit: I am currently going through the CheckIn.aspx source in the DotPeek decompiler and seeing some additional options for the CheckinAction parameter that may be relevant, such as ActionCheckinPublish and ActionCheckinFromClientPublish. I will update this with any additional findings. The page class is Microsoft.SharePoint.ApplicationPages.Checkin, for anyone else interested.
The above answer by Junx is correct. However, the Filename variable is not just the document filename and extension; it should also include the library name. I was able to get this to work using the following.
Example: http://domain/_layouts/Checkin.aspx?Filename=Shared Documents/filename.txt
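Putting the two answers together, a complete check-in call might look like the following sketch (domain, library, and file names are placeholders, with spaces percent-encoded and the URL quoted):
curl --ntlm --user username:password --data "PostBack=true&CheckinAction=ActionCheckin&KeepCheckout=0&CheckinDescription=Uploaded+via+curl" "http://domain/_layouts/CheckIn.aspx?Source=http://domain/Shared%20Documents/Forms/AllItems.aspx&FileName=Shared%20Documents/filename.txt"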
My question about Performing multiple requests using cURL has a pretty comprehensive example using bash and cURL, although it suffers from having to re-enter the password for each request.
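As an aside (my own suggestion, not from the linked question), curl can also read credentials from a ~/.netrc file via the -n flag, which avoids re-entering the password on every request:
# ~/.netrc (restrict permissions with chmod 600)
machine sharepointserver.com login myuser password mypass
curl -n --ntlm -k --upload-file myfile.txt https://sharepointserver.com/sites/mysite/myfile.txt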