Artifactory not following git-lfs protocol

I am trying to improve the reliability of git-lfs against Artifactory, so I analyzed the protocol exchanged between the client and Artifactory.
The first step in downloading objects is to send a batch API request, roughly {download, [{objectID, size}, ...]}. The reply has the shape [{OID, size, auth, actions[]}, ...], and the problem is that actions is a zero-length array: the expected "download" information is not there.
I did a trace of a simple LFS clone (names changed to protect the guilty).
The problem is "_links": the current protocol, documented at
https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
uses "actions" instead. This is what Artifactory returns:
"objects" : [ {
"oid" : "8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"size" : 189,
"_links" : {
"download" : {
"href" : "https://server.org/artifactory/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"header" : {
"Authorization" : "Basic c3lzX21pZ2NpbHg6QVA1NVhjRWExeWhBVGZVRUxoeEpHcGplVktY"
}
}
}
}
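For comparison, a response following the current batch API would carry the same download information under "actions" rather than "_links". A sketch based on the linked batch.md spec (the href and token are placeholders taken from the trace, and the spec also allows optional fields such as "expires_at"):
"objects" : [ {
  "oid" : "8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
  "size" : 189,
  "authenticated" : true,
  "actions" : {
    "download" : {
      "href" : "https://server.org/artifactory/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
      "header" : {
        "Authorization" : "Basic ..."
      }
    }
  }
} ]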
######
GIT_TRACE=1 GIT_CURL_VERBOSE=1 git lfs clone git@github.com:svsintel/testlfs.git
12:28:42.951804 git.c:576 trace: exec: git-lfs clone git@github.com:svsintel/testlfs.git
12:28:42.951860 run-command.c:646 trace: run_command: git-lfs clone git@github.com:svsintel/testlfs.git
12:28:42.957381 trace git-lfs: run_command: 'git' version
WARNING: 'git lfs clone' is deprecated and will not be updated
with new flags from 'git clone'
'git clone' has been updated in upstream Git to have comparable
speeds to 'git lfs clone'.
Cloning into 'testlfs'...
X11 forwarding request failed on channel 0
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 9 (delta 0), reused 6 (delta 0), pack-reused 0
Receiving objects: 100% (9/9), done.
12:28:43.728998 trace git-lfs: run_command: 'git' config -l -f /home/savages/DEVEL/testlfs/.lfsconfig
12:28:43.730944 trace git-lfs: run_command: 'git' config -l
12:28:43.733209 trace git-lfs: run_command: 'git' -c filter.lfs.smudge= -c filter.lfs.clean= -c filter.lfs.process= -c filter.lfs.required=false rev-parse HEAD --symbolic-full-name HEAD
12:28:43.735365 trace git-lfs: run_command: 'git' -c filter.lfs.smudge= -c filter.lfs.clean= -c filter.lfs.process= -c filter.lfs.required=false rev-parse HEAD --symbolic-full-name HEAD
12:28:43.737654 trace git-lfs: tq: running as batched queue, batch size of 100
12:28:43.738479 trace git-lfs: run_command: git cat-file --batch
12:28:43.741076 trace git-lfs: fetch javaguidelink.png [8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b]
12:28:43.741198 trace git-lfs: tq: sending batch of size 1
Git LFS: (0 of 1 files) 0 B / 189 B 12:28:43.741587 trace git-lfs: api: batch 1 files
12:28:43.741724 trace git-lfs: HTTP: POST https://server.org/repo/objects/batch
> POST /artifactory/api/lfs/repo/objects/batch HTTP/1.1
> Host: server.org
> Accept: application/vnd.git-lfs+json; charset=utf-8
> Content-Length: 122
> Content-Type: application/vnd.git-lfs+json; charset=utf-8
> User-Agent: git-lfs/2.3.4 (GitHub; linux amd64; go 1.8.3)
>
{"operation":"download","objects":[{"oid":"8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b","size":189}]}12:28:44.552717 trace git-lfs: HTTP: 401
< HTTP/1.1 401 Unauthorized
< Transfer-Encoding: chunked
< Content-Type: application/json
< Date: Tue, 17 Dec 2019 20:28:44 GMT
< Server: Artifactory/5.3.0
< Set-Cookie: BIGipServerlbauto-af01p-ir-https=!lwwF2ZPxFqh78i9ZXUYuw/jzKcOV9bdWzE0rDvzrv7YDwaHF/aePpE4m4YlV0HLmvlOn3f4in6Ea; path=/; Httponly; Secure
< Www-Authenticate: Basic realm="Artifactory Realm"
< X-Artifactory-Id: a4228f8f67d17308ce2e6929fb0d1c96b8fc4fc0
< X-Artifactory-Node-Id: irvapp049
<
12:28:44.552948 trace git-lfs: HTTP: {
"errors" : [ {
"status" : 401,
"message" : "Authorization Required"
} ]
}
{
"errors" : [ {
"status" : 401,
"message" : "Authorization Required"
} ]
}12:28:44.553053 trace git-lfs: setting repository access to basic
12:28:44.553065 trace git-lfs: run_command: 'git' config --replace-all lfs.https://server.org/artifactory/api/lfs/repo.access basic
12:28:44.555768 trace git-lfs: api: http response indicates "basic" authentication. Resubmitting...
12:28:44.555852 trace git-lfs: creds: git credential fill ("https", "server.org", "artifactory/api/lfs/repo")
12:28:44.562949 trace git-lfs: Filled credentials for https://server.org/artifactory/api/lfs/repo
12:28:44.563100 trace git-lfs: HTTP: POST https://server.org/repo/objects/batch
> POST /artifactory/api/lfs/repo/objects/batch HTTP/1.1
> Host: server.org
> Accept: application/vnd.git-lfs+json; charset=utf-8
> Authorization: Basic * * * * *
> Content-Length: 122
> Content-Type: application/vnd.git-lfs+json; charset=utf-8
> User-Agent: git-lfs/2.3.4 (GitHub; linux amd64; go 1.8.3)
>
{"operation":"download","objects":[{"oid":"8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b","size":189}]}{"operation":"download","objects":[{"oid":"8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b","size":189}]}12:28:46.318344 trace git-lfs: HTTP: 200
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Content-Type: application/vnd.git-lfs+json
< Date: Tue, 17 Dec 2019 20:28:46 GMT
< Server: Artifactory/5.3.0
< Set-Cookie: BIGipServerlbauto-af01p-ir-https=!PuMWH2edbtitYCJZXUYuw/jzKcOV9b83tgCkdCyENMF11Shn6y8h8GdLZf7RA08ntnJe+hDmFL6BoA==; path=/; Httponly; Secure
< X-Artifactory-Id: a4228f8f67d17308ce2e6929fb0d1c96b8fc4fc0
< X-Artifactory-Node-Id: irvapp032
<
12:28:46.518402 trace git-lfs: HTTP: {
"objects" : [ {
"oid" : "8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"size" : 189,
"_links" : {
"download" : {
"href" : "https://server.org/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"header" : {
"Authorization" : "Basic c3lzX21pZ2NpbHg6QVA1NVhjRWExeWhBVGZVRUxoeEpHcGplVktY"
}
}
}
} ]
}
{
"objects" : [ {
"oid" : "8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"size" : 189,
"_links" : {
"download" : {
"href" : "https://server.org/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b",
"header" : {
"Authorization" : "Basic c3lzX21pZ2NpbHg6QVA1NVhjRWExeWhBVGZVRUxoeEpHcGplVktY"
}
}
}
} ]
}12:28:46.518635 trace git-lfs: tq: starting transfer adapter "basic"
Git LFS: (0 of 1 files) 0 B / 189 B 12:28:46.519344 trace git-lfs: HTTP: GET https://server.org/artifactory/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b
> GET /artifactory/repo/objects/8b/4d/8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b HTTP/1.1
> Host: server.org
> Authorization: Basic * * * * *
> User-Agent: git-lfs/2.3.4 (GitHub; linux amd64; go 1.8.3)
>
Git LFS: (0 of 1 files) 0 B / 189 B 12:28:47.326791 trace git-lfs: HTTP: 200
< HTTP/1.1 200 OK
< Content-Length: 189
< Accept-Ranges: bytes
< Content-Disposition: attachment; filename="8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b"; filename*=UTF-8''8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b
< Content-Type: application/octet-stream
< Date: Tue, 17 Dec 2019 20:28:47 GMT
< Etag: b0b31c0cbccf04819012d82385f3966ba0856d18
< Last-Modified: Fri, 02 Aug 2019 09:42:37 GMT
< Server: Artifactory/5.3.0
< Set-Cookie: BIGipServerlbauto-af01p-ir-https=!kfSG8soGK6fsAIdZXUYuw/jzKcOV9czEPoAj80t8OIgyLdw5+B9YrT0o6uFWA5Jp3HDyK32k00JB; path=/; Httponly; Secure
< X-Artifactory-Filename: 8b4d08d6b3a211d6bb09f636ebdfcdc88ade2f20bd0c011954929b4a65aec07b
< X-Artifactory-Id: a4228f8f67d17308ce2e6929fb0d1c96b8fc4fc0
< X-Artifactory-Node-Id: irvapp049
< X-Checksum-Md5: b36c320fa2845b3c75f95685474843c9
< X-Checksum-Sha1: b0b31c0cbccf04819012d82385f3966ba0856d18
<
Git LFS: (1 of 1 files) 189 B / 189 B
12:28:47.556210 trace git-lfs: Install hook: pre-push, force=false, path=/home/savages/DEVEL/testlfs/.git/hooks/pre-push, upgrading...
12:28:47.556810 trace git-lfs: Install hook: post-checkout, force=false, path=/home/savages/DEVEL/testlfs/.git/hooks/post-checkout, upgrading...
12:28:47.557303 trace git-lfs: Install hook: post-commit, force=false, path=/home/savages/DEVEL/testlfs/.git/hooks/post-commit, upgrading...
12:28:47.558058 trace git-lfs: Install hook: post-merge, force=false, path=/home/savages/DEVEL/testlfs/.git/hooks/post-merge, upgrading...

Artifactory supports the Git LFS protocol from version 3.9 onwards.
https://www.jfrog.com/confluence/display/JFROG/Git+LFS+Repositories
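For reference, a repository is typically pointed at an Artifactory LFS endpoint through a .lfsconfig file at the repository root. A minimal sketch using the endpoint pattern visible in the trace above (server and repository names are placeholders):
[lfs]
  url = "https://server.org/artifactory/api/lfs/repo"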


Connecting React Production build with Express Gateway

Our React development build runs flawlessly with Express Gateway set up on localhost. After building React for production and running serve -s build, the login page comes up, as it is the entry point of the app. We get a 200 OK response when we submit sign-in credentials, but the request to the server was not actually successful: the token it saves to browser storage is undefined, and the response body is "You need to enable javascript...". JS is enabled, no doubt. I have checked by using
axios.post('http://localhost:8080/api/v1/auth/sign-in', userData)
It works fine, but when set up through the proxy:
axios.post('/auth/sign-in', userData)
it does not work.
Here is the relevant part of the YAML for the Express Gateway setup:
http:
  port: 8080
apiEndpoints:
  auth-service:
    host: "*"
    paths: ["/api/v1/auth/*", "/api/v1/auth"]
  mail-service:
    host: "*"
    paths: ["/api/v1/mail/*", "/api/v1/mail"]
serviceEndpoints:
  auth-service-endpoint:
    url: http://localhost:3003/
  mail-service-endpoint:
    url: http://localhost:3005/
policies:
  - proxy
pipelines:
  auth-service-pipeline:
    apiEndpoints:
      - auth-service
    policies:
      - proxy:
          action:
            serviceEndpoint: auth-service-endpoint
            changeOrigin: true
            stripPath: true
  mail-service-pipeline:
    apiEndpoints:
      - mail-service
    policies:
      - proxy:
          action:
            serviceEndpoint: mail-service-endpoint
            changeOrigin: true
            stripPath: true
I put setupProxy.js in the src directory of the React app:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
  app.use(createProxyMiddleware('/api/v1', {
    target: 'http://localhost:8080',
    secure: false,
    changeOrigin: true,
    // pathRewrite: {
    //   "^/api": "/api/v1",
    // }
  }));
};
Currently everything is on the same machine; we are not using Docker.
The application works in the dev environment, but in the production build the sign-in request just shows the 200 OK response described above.
Any help will be appreciated.
[Edit]
krypton:admin-dashboard-server hasan$ curl -v http://localhost:3001/find_all_services/1/10
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3001 (#0)
> GET /find_all_services/1/10 HTTP/1.1
> Host: localhost:3001
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< X-DNS-Prefetch-Control: off
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=15552000; includeSubDomains
< X-Download-Options: noopen
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json; charset=utf-8
< Content-Length: 1833
< ETag: W/"729-LM91B3vCUrbvesBrp32ykiXXkQo"
< Date: Tue, 12 Jan 2021 14:57:24 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
[{"id":1,"name":"Laser Hair Remove"},
{"id":2,"name":"Facial Treatments"}
]
krypton:admin-dashboard-server hasan$ curl -v http://localhost:8080/api/v1/services/find_all_services/1/10
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /api/v1/services/find_all_services/1/10 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< access-control-allow-origin: *
< x-dns-prefetch-control: off
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=15552000; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< content-type: application/json; charset=utf-8
< content-length: 1833
< etag: W/"729-LM91B3vCUrbvesBrp32ykiXXkQo"
< date: Tue, 12 Jan 2021 15:03:45 GMT
< connection: keep-alive
<
* Connection #0 to host localhost left intact
[{"id":1,"name":"Laser Hair Remove"},
{"id":2,"name":"Facial Treatments"}
]
krypton:admin-dashboard-server hasan$ curl -v -H "Content-Type: application/json" -X POST -d '{"email":"mh.mithun@gmail.com","password":"safe123"}' http://localhost:8080/api/v1/auth/sign-in
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> POST /api/v1/auth/sign-in HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 52
>
* upload completely sent off: 52 out of 52 bytes
< HTTP/1.1 200 OK
< access-control-allow-origin: *
< x-dns-prefetch-control: off
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=15552000; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< content-type: application/json; charset=utf-8
< content-length: 270
< etag: W/"10e-S+kd8b4Yfl7un04FVGe3MFLFEaY"
< date: Tue, 12 Jan 2021 15:40:12 GMT
< connection: keep-alive
<
* Connection #0 to host localhost left intact
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhbGdvcml0aG0iOiJIUzI1N"
krypton:admin-dashboard-server hasan$ curl -v -H "Content-Type: application/json" -X POST -d '{"email":"mh.mithun@gmail.com","password":"safe123"}' http://localhost:3003/sign-in
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3003 (#0)
> POST /sign-in HTTP/1.1
> Host: localhost:3003
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 52
>
* upload completely sent off: 52 out of 52 bytes
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< X-DNS-Prefetch-Control: off
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=15552000; includeSubDomains
< X-Download-Options: noopen
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json; charset=utf-8
< Content-Length: 270
< ETag: W/"10e-LW/1l5fXf5BaiF3KJMvG60xRthE"
< Date: Tue, 12 Jan 2021 15:45:33 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhbGdvcml0aG0i"

CouchDB can't get cookie auth session for nonadmin user

For admin user:
$ curl -X POST localhost:5984/_session -d "username=admin&password=admin"
{"ok":true,"name":"admin","roles":["_admin"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=YWRtaW...
{"ok":true,"userCtx":{"name":"admin","roles":["_admin"]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"],"authenticated":"cookie"}}
but for regular user:
$ curl -vX POST localhost:5984/_session -d "username=user&password=123"
{"ok":true,"name":"user","roles":["users"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=ZGlqbzo...
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
The same thing happens when I do an XMLHttpRequest via the iron-ajax element, or simply from Chrome. What am I doing wrong?
CouchDB version: 2.1.1
Config:
[chttpd]
bind_address = 0.0.0.0
port = 5984
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
[httpd]
enable_cors = true
[couch_httpd_auth]
allow_persistent_cookies = true
timeout = 60000
[cors]
credentials = true
origins = *
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
I didn't quite get your problem, but here is what I do with curl to authenticate with a cookie as a nonadmin user:
First I run curl with the -v option to see the header fields:
$ curl -k -v -X POST https://192.168.1.106:6984/_session -d 'username=jan&password=****'
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.1.106...
* Connected to 192.168.1.106 (192.168.1.106) port 6984 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 604 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* error fetching CN from cert:The requested data were not available.
* common name: (does not match '192.168.1.106')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: O=Tech Studio
* start date: Sat, 31 Mar 2018 04:37:51 GMT
* expire date: Tue, 30 Mar 2021 04:37:51 GMT
* issuer: O=Tech Studio
* compression: NULL
* ALPN, server did not agree to a protocol
> POST /_session HTTP/1.1
> Host: 192.168.1.106:6984
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 25
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 25 out of 25 bytes
< HTTP/1.1 200 OK
< Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
< Server: CouchDB/2.1.1 (Erlang OTP/18)
< Date: Wed, 02 May 2018 08:03:27 GMT
< Content-Type: application/json
< Content-Length: 44
< Cache-Control: must-revalidate
<
{"ok":true,"name":"jan","roles":["sample"]}
* Connection #0 to host 192.168.1.106 left intact
In the header fields above I see the cookie:
Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
I then use that cookie to authenticate as the nonadmin user and get that same user's info like this:
$ curl -k -X GET https://192.168.1.106:6984/_users/org.couchdb.user:jan -H 'Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb'
{"_id":"org.couchdb.user:jan","_rev":"3-f11b227a6e1236fa502af668fdbf326d","name":"jan","roles":["sample"],"type":"user","password_scheme":"pbkdf2","iterations":10,"derived_key":"a973123ebd9dbc2a543d477a506268b018e7aab4","salt":"0ef2111a894062b08ffd723fd34b6b75"}
The problem was gone when I removed this line from my local.ini:
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
I had used the incorrect handler, couch_httpd_auth, in the config for chttpd, when that handler is only written to work with the original couch_httpd module.
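For reference, if you do want to set the handlers explicitly for chttpd on CouchDB 2.x, the chttpd_auth variants are the ones intended for that section; this sketch matches the documented 2.x defaults:
[chttpd]
authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}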

Use express sendFile for HEAD requests

sendFile is for sending files and it also figures out some interesting headers from the file (like content length). For a HEAD request I would ideally want the exact same headers but just skip the body.
There doesn't seem to be an option for this in the API. Maybe I can override something in the response object to stop it from sending anything?
Here's what I got:
res.sendFile(file, { headers: hdrs, lastModified: false, etag: false })
Has anyone solved this?
As Robert Klep has already written, sendFile already has the required behavior of sending the headers and not sending the body if the request method is HEAD.
In addition to that, Express already handles HEAD requests for routes that have GET handlers defined. So you don't even need to define any HEAD handler explicitly.
Example:
let app = require('express')();
let file = __filename;
let hdrs = { 'X-Custom-Header': '123' };

app.get('/file', (req, res) => {
  res.sendFile(file, { headers: hdrs, lastModified: false, etag: false });
});

app.listen(3322, () => console.log('Listening on 3322'));
This sends its own source code on GET /file as can be demonstrated with:
$ curl -v -X GET localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> GET /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:45:36 GMT
< Connection: keep-alive
<
[...]
The [...] is the body that was not included here.
Without adding any new handler this will also work:
$ curl -v -X HEAD localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> HEAD /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:46:29 GMT
< Connection: keep-alive
<
This is the same but with no body.
Express uses send to implement sendFile, which already does exactly what you want.
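As an aside about the curl invocations above: per curl's documentation, -X HEAD only changes the method word in the request and can leave curl waiting for a body that never arrives; the idiomatic way to issue a HEAD request is -I (or --head):
$ curl -I localhost:3322/file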

Difference between curl expressions

I have an API server running at localhost:3000 and I am trying to query it using these two expressions:
[wani@lenovo ilparser-docker]$ time (curl "localhost:3000/parse?lang=hin&data=देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.023s
user 0m0.009s
sys 0m0.004s
[wani@lenovo ilparser-docker]$ time (curl -XGET localhost:3000/parse -F lang=hin -F data="देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.101s
user 0m0.020s
sys 0m0.070s
Why does the second expression take so much more time?
With more verbosity:
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse -F lang=hin -F data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 244
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------1eb5e5991b976cb1
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Length: 70
< Server: Mojolicious (Perl)
< Content-Type: application/json;charset=UTF-8
< Date: Mon, 21 Mar 2016 11:06:09 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.106s
user 0m0.027s
sys 0m0.068s
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse --data lang=hin --data data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 23 out of 23 bytes
< HTTP/1.1 200 OK
< Server: Mojolicious (Perl)
< Content-Length: 70
< Connection: keep-alive
< Date: Mon, 21 Mar 2016 11:06:24 GMT
< Content-Type: application/json;charset=UTF-8
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.031s
user 0m0.011s
sys 0m0.003s
Expect: 100-continue sounded fishy, so I cleared that header:
[wani@lenovo ilparser-docker]$ time curl -v -F lang=hin -F data="देश" "localhost:3000/parse" -H Expect: --trace-time
16:48:04.513691 * Trying 127.0.0.1...
16:48:04.513933 * Connected to localhost (127.0.0.1) port 3000 (#0)
16:48:04.514083 * Initializing NSS with certpath: sql:/etc/pki/nssdb
16:48:04.610095 > POST /parse HTTP/1.1
16:48:04.610095 > Host: localhost:3000
16:48:04.610095 > User-Agent: curl/7.43.0
16:48:04.610095 > Accept: */*
16:48:04.610095 > Content-Length: 244
16:48:04.610095 > Content-Type: multipart/form-data; boundary=------------------------24f30647b16ba82d
16:48:04.610095 >
16:48:04.618107 < HTTP/1.1 200 OK
16:48:04.618194 < Content-Length: 70
16:48:04.618249 < Server: Mojolicious (Perl)
16:48:04.618306 < Content-Type: application/json;charset=UTF-8
16:48:04.618370 < Date: Mon, 21 Mar 2016 11:18:04 GMT
16:48:04.618430 < Connection: keep-alive
16:48:04.618492 <
16:48:04.618590 * Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.117s
user 0m0.023s
sys 0m0.082s
Now the only time-consuming thing left is: Initializing NSS with certpath: sql:/etc/pki/nssdb. Why does curl do that in this context?
After a little help on IRC from Daniel Stenberg, I came to know that the DB load happens because curl initializes NSS in that case: curl needs a good random source for the boundary separator used by -F. Curl could have used the getrandom() syscall or read bits out of /dev/urandom, since boundary separators don't need to be cryptographically secure in any way, but curl wants secure random numbers in some other places, so it reuses the random function that it already has.

Getting conditionNotMet error on migration of emails > 32kb in size

I've had success migrating small test messages with the Google Email Migration API v2. However, when migrating larger messages, I get an error like:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "conditionNotMet",
        "message": "Limit reached.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "Limit reached."
  }
}
I start noticing the error sporadically with messages around 32 KB in size. At about 40 KB, the error becomes consistent (no messages succeed). I've confirmed the error occurs whether I'm using google-api-python-client with my non-standard discovery document or the OAuth 2.0 playground. Here's what a successful call and response for a message under 32 KB looks like:
POST /upload/email/v2/users/jay@ditoweb.com/mail?uploadType=multipart HTTP/1.1
Host: www.googleapis.com
Content-length: 6114
Content-type: multipart/related; boundary="part_boundary"
Authorization: Bearer <removed>
--part_boundary
Content-Type: application/json; charset=UTF-8
{
'isInbox': 'true',
'isUnread': 'true'
}
--part_boundary
Content-Type: message/rfc822
From: <admin@jay.powerposters.org>
To: <admin@jay.powerposters.org>
Subject: test growing message sizes
Date: Wed, 17 Jul 2013 10:40:48 -0400
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
<last line repeated ~50 times>
--part_boundary--
HTTP/1.1 204 No Content
Content-length: 0
Via: HTTP/1.1 GWA
X-google-cache-control: remote-fetch
Server: HTTP Upload Server Built on Jul 8 2013 15:32:26 (1373322746)
Etag: "S82oyh6kQMvIt9YE14Ogc8RmmsQ/vyGp6PvFo4RvsFtPoIWeCReyIC8"
Date: Wed, 17 Jul 2013 17:35:13 GMT
And here's what a failed message of ~150 KB looks like:
POST /upload/email/v2/users/admin@jay.powerposters.org/mail?uploadType=multipart HTTP/1.1
Host: www.googleapis.com
Content-length: 189946
Content-type: multipart/related; boundary="part_boundary"
Authorization: Bearer <removed>
--part_boundary
Content-Type: application/json; charset=UTF-8
{
'isInbox': 'true',
'isUnread': 'true'
}
--part_boundary
Content-Type: message/rfc822
From: <admin@jay.powerposters.org>
To: <admin@jay.powerposters.org>
Subject: test growing message sizes
Date: Wed, 17 Jul 2013 10:40:48 -0400
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
<last line repeated ~1500 times>
--part_boundary--
HTTP/1.1 412 Precondition Failed
Content-length: 240
Via: HTTP/1.1 GWA
X-google-cache-control: remote-fetch
Server: HTTP Upload Server Built on Jul 8 2013 15:32:26 (1373322746)
Date: Wed, 17 Jul 2013 16:57:23 GMT
Content-type: application/json
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "conditionNotMet",
        "message": "Limit reached.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "Limit reached."
  }
}
Google has fixed the issue on their end. I can now migrate messages of all sizes.
