Change issue closing pattern for a GitLab user account

I want to close the issue whose title matches the name of a file I push (my source files are named with unique integers, e.g. 34521.cpp, and there are corresponding issues on GitLab, e.g. Problem #34521).
How can I do so?
The default pattern is not suitable, as I have 2000+ issues and I do not want to reference issues by their IDs each time; I want it to be automated. So I was checking the page:
Change the issue closing pattern.
It says I need access to the server where GitLab is installed. Does that mean I cannot change the issue closing pattern for a user account hosted on GitLab.com (http://gitlab.com)?

You can't define a custom closing pattern on gitlab.com; that is only possible on a self-hosted GitLab instance. What you can do is use webhooks to listen for push events on a remote server of your own. You can then parse the commits yourself, decide which issues to close, and close them through the GitLab API (with a hard-coded access token).
This can be tested locally using an HTTP tunnel like ngrok.
The following Node.js script starts a server exposing a /webhook endpoint, which is called whenever a push occurs on your repo.
const express = require('express');
const bodyParser = require('body-parser');
const axios = require('axios');
const to = require('await-to-js').to;

const port = 3000;
const projectId = "4316159";
const accessToken = "YOUR_ACCESS_TOKEN";

const app = express();
app.use(bodyParser.json());

app.post('/webhook', async function (req, res) {
  console.log("received push event");
  let result, err, closeRes;
  for (let i = 0; i < req.body.commits.length; i++) {
    for (let j = 0; j < req.body.commits[i].added.length; j++) {
      // strip the extension from the added file name, e.g. "34521.cpp" -> "34521"
      const filenameWithoutExt = req.body.commits[i].added[j].split('.').slice(0, -1).join('.');
      // search for issues whose title contains "#<filename>"
      // ('#' must be percent-encoded as %23, otherwise it is treated as a URL fragment and dropped)
      [err, result] = await to(axios({
        url: `https://gitlab.com/api/v4/projects/${projectId}/issues?search=%23${filenameWithoutExt}`,
        method: 'GET',
        headers: {
          "PRIVATE-TOKEN": accessToken
        }
      }));
      if (err) {
        console.log(err);
      } else {
        if (result.data.length !== 0) {
          // close those issues
          for (const item of result.data) {
            console.log(`closing issue #${item.iid} with title ${item.title}`);
            [err, closeRes] = await to(axios({
              url: `https://gitlab.com/api/v4/projects/${projectId}/issues/${item.iid}?state_event=close`,
              method: 'PUT',
              headers: {
                "PRIVATE-TOKEN": accessToken
              }
            }));
            if (err) {
              console.log(err);
            } else {
              console.log(`closing status: ${closeRes.status}`);
            }
          }
        } else {
          console.log("no issues were found");
        }
      }
    }
  }
  res.sendStatus(200);
});

app.listen(port, () => console.log(`listening on port ${port}!`));
In the above you need to change the access token value and the projectId. Also note that the code above only checks added files; you can modify it to include modified or removed files to match your requirements, as in the sketch below.
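For example, a minimal sketch (an assumption on my part, not part of the original answer) of how the same loop could also react to changed and deleted files; the GitLab push payload lists those paths under commit.modified and commit.removed:

for (const commit of req.body.commits) {
  // gather every file touched by this commit, not just the added ones
  const touchedFiles = [...commit.added, ...commit.modified, ...commit.removed];
  for (const file of touchedFiles) {
    const filenameWithoutExt = file.split('.').slice(0, -1).join('.');
    // ...run the same issue search / close logic as above
  }
}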
Launch ngrok on port 3000 (ngrok http 3000) and copy the URL it gives you into the Integrations section of your repo:
Now whenever you add a file, the webhook takes the filename without its extension, searches for all issues with #filename_without_extension in their title, and closes them right away.
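As an alternative to the UI, the webhook can also be registered through the GitLab API. A minimal sketch, assuming the same projectId and accessToken as above and a placeholder ngrok URL (replace it with the one ngrok prints for you):

const axios = require('axios');

// POST /projects/:id/hooks registers a webhook; push_events limits it to push events.
axios({
  url: `https://gitlab.com/api/v4/projects/${projectId}/hooks`,
  method: 'POST',
  headers: { "PRIVATE-TOKEN": accessToken },
  data: {
    url: 'https://YOUR-SUBDOMAIN.ngrok.io/webhook', // hypothetical tunnel URL
    push_events: true
  }
}).then(r => console.log(`hook created with id ${r.data.id}`))
  .catch(e => console.log(e));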

Related

Empty response body in HTTP response in GitHub Actions

I'm trying to create a github action which requires sending an http request to https://www.instagram.com/<username>/?__a=1.
When I run it locally, it runs perfectly fine and gives me the number of followers.
But when I use it in GitHub Actions, it isn't able to parse the JSON string because the response is null.
Here is a link to the github action file https://github.com/ashawe/actions-check/blob/e80ca115544979cdb3180207b99c7724e4446849/index.js
Here is the code to get the followers (starts at line #94):
promiseArray.push(new Promise((resolve, reject) => {
  const url = 'https://www.instagram.com/' + INSTAGRAM_USERNAME + '/?__a=1';
  core.info("url is");
  core.info(url);
  http.get(url, (response) => {
    let chunks_of_data = [];
    response.on('data', (fragments) => {
      chunks_of_data.push(fragments);
    });
    response.on('end', () => {
      let response_body = Buffer.concat(chunks_of_data);
      core.info(response_body.toString());
      let responseJSON = JSON.parse(response_body.toString());
      resolve((responseJSON.graphql.user.edge_followed_by.count).toString());
    });
    response.on('error', (error) => {
      reject(error);
    });
  });
}));
and then I'm processing it like:
Promise.allSettled(promiseArray).then((results) => {
  results.forEach((result, index) => {
    if (result.status === 'fulfilled') {
      // Succeeded
      // core.info(runnerNameArray[index] + ' runner succeeded. Post count: ' + result.value.length);
      // postsArray.push(result.value);
      instagram_followers = result.value;
    } else {
      jobFailFlag = true;
      // Rejected
      // core.error(runnerNameArray[index] + ' runner failed, please verify the configuration. Error:');
      core.error(result.reason);
    }
  });
}).finally(() => {
  try {
    const followers = instagram_followers;
    const readmeData = fs.readFileSync(README_FILE_PATH, 'utf8');
    // core.info(readmeData);
    const shieldURL = "https://img.shields.io/badge/ %40 " + INSTAGRAM_USERNAME + "-" + followers + "-%23E4405F?style=for-the-badge&logo=instagram";
    const instagramBadge = "<img align='left' alt='instagram-followers' src='" + shieldURL + "' />";
    const newReadme = buildReadme(readmeData, instagramBadge);
    // core.info(newReadme);
    // if there's a change in the readme file, update it
    if (newReadme !== readmeData) {
      core.info('Writing to ' + README_FILE_PATH);
      fs.writeFileSync(README_FILE_PATH, newReadme);
      if (!process.env.TEST_MODE) {
        // noinspection JSIgnoredPromiseFromCall
        commitReadme();
      }
    } else {
      core.info('No change detected, skipping');
      process.exit(0);
    }
  } catch (e) {
    core.error(e);
    process.exit(1);
  }
});
But when I run the action, it gives a JSON parse error, which means that response_body isn't a complete JSON response, even though a request to https://www.instagram.com/USERNAME/?__a=1 does send a JSON response.
UPDATE
Basically, every time you hit that endpoint it returns the login HTML page, which causes the JSON parse to fail. It appears that you may need to use the API, which requires you to authenticate before getting info about users, or figure out other scraping methodologies.
I was able to recreate this failure on my local PC by switching to a VPN and a private browser window. When I hit the endpoint it took me to the login screen, and when I hit it through curl in the terminal it returned nothing. But when I got off the VPN, everything worked fine. I think the reason it worked on your local machine is that there's some caching happening in the browser and you're probably not on a VPN. I suspect there's some network blacklisting happening for VPN traffic. I don't know the GitHub-hosted network, so I would recommend opening a ticket with them if you want to learn more about that.
Here are the Instagram API docs for quick reference:
https://developers.facebook.com/docs/instagram-basic-display-api/getting-started
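For illustration only, a minimal sketch of querying the authenticated API from Node instead of scraping; it assumes you have already obtained a user access token (IG_ACCESS_TOKEN is a placeholder here), and note that the Basic Display API only exposes fields like id, username and media_count, while follower counts require the Instagram Graph API for business/creator accounts:

const https = require('https');

const ACCESS_TOKEN = process.env.IG_ACCESS_TOKEN; // assumed to come from the OAuth flow
const url = `https://graph.instagram.com/me?fields=id,username,media_count&access_token=${ACCESS_TOKEN}`;

https.get(url, (response) => {
  let chunks = [];
  response.on('data', (chunk) => chunks.push(chunk));
  response.on('end', () => {
    const body = JSON.parse(Buffer.concat(chunks).toString());
    console.log(`user ${body.username} has ${body.media_count} posts`);
  });
}).on('error', (err) => console.error(err));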
Previous response (leaving it here for other users' future reference):
You are not passing the username, so it's trying to query the endpoint with an empty username.
Instead of running just node index.js in your action, you need to call your action and provide it with the parameters that it needs:
- name: Your github action
  uses: ./ # Uses an action in the root directory
  with:
    username: '_meroware'
Then your code will pick it up properly:
const INSTAGRAM_USERNAME = core.getInput('username');
const url = 'https://www.instagram.com/' + INSTAGRAM_USERNAME + '/?__a=1';
Resources:
https://docs.github.com/en/actions/creating-actions/creating-a-javascript-action

Trying to send proper header with Express, Tedious, while accessing Azure SQL

My backend gets a request to fetch records from an Azure SQL DB. To manage these requests I'm using Express in Node.js, and Tedious (to connect to the DB). When a request to the appropriate route comes in, Tedious opens the connection with the DB, queries it, and should send the response back to the frontend.
However, the code responds before I have an answer from the DB, so when I go to send the real (actually desired) response, Express tells me it has already sent the headers (the dreaded 'Cannot set headers after they are sent to the client').
After debugging quite a bit (using several console.log(JSON.stringify(resp.headersSent));) to see when the response was actually sent, I noticed that it is sent the moment I connect with Azure (see below).
I'm not sure if I'm missing something (though I have already checked the documentation for all those packages quite a bit), but how can I control when the response is sent? Or is there another way of doing this?
I omitted several of the other routes for brevity. The other routes work fine, so I know the code connects to the Azure DB and the frontend queries the backend correctly. Help is appreciated. Thank you.
const express = require('express');
const cors = require('cors');
const Connection = require('tedious').Connection;
const Request = require('tedious').Request;

const config = {
  authentication: {
    options: {
      userName: "xxxx",
      password: "xxxx"
    },
    type: 'default'
  },
  server: "xxxx",
  options: {
    database: "xxxx",
    encrypt: true
  }
};

const app = express();
app.use(express.json({ type: '*/*' }));
app.use(cors({ origin: '*' }));

app.get("/allproj/", function (req, resp) {
  const q = `select Title, Report_Date, Project_Number, Phase_Code, Items_No, PId from projec order by PId desc`;
  let ansf = [];
  const connection = new Connection(config);
  connection.on('connect', (err, connection) => {
    if (err) {
      console.log(err);
    } else {
      // this is the moment the headers are sent,
      // seemingly with positive response from connection
      queryItems(q);
    }
  });
  queryItems = (q) => {
    request = new Request(q, function (err, rowCount) {
      if (err) {
        console.log(err);
      } else {
        console.log(rowCount + ' rows pulled');
        connection.close();
      }
    });
    request.on('row', function (columns) {
      let ans = [];
      columns.forEach(function (column) {
        ans.push(column.value);
        if (ans.length === 6) { // I know each row is 6 cols long
          ansf.push(ans);
          ans = [];
        }
      });
      console.log('ansf length: ' + ansf.length);
      resp.send({ ansf }); // This is the response I would like to return
    });
    request.on('done', function (rowCount) {
      console.log(rowCount + ' rows returned');
      connection.close();
    });
    connection.execSql(request);
  };
  resp.redirect("/");
});

app.listen(3000, process.env.IP, function () {
  console.log("Started OK...");
});
Remove resp.redirect("/");
It is already redirecting your request to "/", so by the time control reaches resp.send({ ansf }) the headers have already been sent and you get that error.
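For reference, a minimal sketch of the corrected route (reusing the config, query and 6-column row handling from the question); the response is only sent once the query completes, and the redirect is gone:

app.get("/allproj/", function (req, resp) {
  const q = `select Title, Report_Date, Project_Number, Phase_Code, Items_No, PId from projec order by PId desc`;
  let ansf = [];
  const connection = new Connection(config);
  connection.on('connect', (err) => {
    if (err) {
      console.log(err);
      return resp.status(500).send({ error: 'DB connection failed' });
    }
    const request = new Request(q, (err, rowCount) => {
      if (err) {
        console.log(err);
        resp.status(500).send({ error: 'Query failed' });
      } else {
        console.log(rowCount + ' rows pulled');
        resp.send({ ansf }); // respond only after all rows have been collected
      }
      connection.close();
    });
    request.on('row', (columns) => {
      let ans = [];
      columns.forEach((column) => {
        ans.push(column.value);
        if (ans.length === 6) { // each row is 6 columns long
          ansf.push(ans);
          ans = [];
        }
      });
    });
    connection.execSql(request);
  });
  // no resp.redirect("/") here: the single response is sent from the Request callback
});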

This is a general question about Express.js running on Node.js inside a Docker container and in the cloud

I have built two docker images. One with nginx that serves my angular web app and another with node.js that serves a basic express app. I have tried to access the express app from my browser in two different tabs at the same time.
In one tab the angular dev server (ng serve) serves up the web page. In the other tab the docker nginx container serves up the web page.
While accessing the Node.js Express app at the same time from both tabs, the data starts to mix and mingle and the results returned to both tabs are a mishmash of the two requests (one from each browser tab)...
I'll try and make this more simple by showing my express app code here...but to answer this question you may not even need to know what the code is at all...so maybe check the question as stated below the code first.
'use strict';

/***********************************
 GOOGLE GMAIL AND OAUTH SETUP
***********************************/
const fs = require('fs');
const {google} = require('googleapis');
const gmail = google.gmail('v1');
const clientSecretJson = JSON.parse(fs.readFileSync('./client_secret.json'));
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);

/***********************************
 EXPRESS WITH CORS SETUP
***********************************/
const PORT = 8000;
const HOST = '0.0.0.0';
const express = require('express');
const cors = require('cors');
const cookieParser = require('cookie-parser');
const bodyParser = require('body-parser');

const whiteList = [
  'http://localhost:4200',
  'http://localhost:80',
  'http://localhost',
];

const googleApi = express();
googleApi.use(
  cors({
    origin: whiteList
  }),
  cookieParser(),
  bodyParser()
);

function getPageOfThreads(pageToken, userId, labelIds) {
  return new Promise((resolve, reject) => {
    gmail.users.threads.list(
      {
        'auth': oauth2Client,
        'userId': userId,
        'labelIds': labelIds,
        'pageToken': pageToken
      },
      (error, response) => {
        if (error) {
          console.error(error);
          reject(error);
        }
        resolve(response.data);
      }
    )
  });
}

async function getPages(nextPageToken, userId, labelIds, result) {
  while (nextPageToken) {
    let pageOfThreads = await getPageOfThreads(nextPageToken, userId, labelIds);
    console.log(pageOfThreads.nextPageToken);
    pageOfThreads.threads.forEach((thread) => {
      result = result.concat(thread.id);
    })
    nextPageToken = pageOfThreads.nextPageToken;
  }
  return result;
}

googleApi.post('/threads', (req, res) => {
  console.log(req.body);
  let threadIds = [];
  oauth2Client.credentials = req.body.token;
  let getAllThreadIds = new Promise((resolve, reject) => {
    gmail.users.threads.list(
      { 'auth': oauth2Client, 'userId': 'me', 'maxResults': 500 },
      (err, response) => {
        if (err) {
          console.error(err)
          reject(err);
        }
        if (response.data.threads) {
          response.data.threads.forEach((thread) => {
            threadIds = threadIds.concat(thread.id);
          });
        }
        if (response.data.nextPageToken) {
          getPages(response.data.nextPageToken, 'me', ['INBOX'], threadIds).then(result => {
            resolve(result);
          }).catch((err) => {
            console.error(err);
            reject(err);
          });
        } else {
          resolve(threadIds);
        }
      }
    );
  });
  getAllThreadIds
    .then((result) => {
      res.send({ threadIds: result });
    })
    .catch((error) => {
      res.status(500).send({ error: 'Request failed with error: ' + error })
    });
});

googleApi.get('/', (req, res) => res.send('Hello World!'))

googleApi.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
The Angular app makes a simple request to the Express app and waits for the reply...which it properly receives...but when I try to make two requests at the exact same time, data starts to get mixed together and results are given back to each browser tab from different accounts...
...and the question is... When running containers in the cloud, is this kind of thing an issue? Does one need to spin up a new container for each client that wants to actively connect to the Express service so that their data doesn't get mixed?
...or is this an issue I am seeing because the Express app is being accessed locally from inside my machine? If two machines with two different IP addresses tried to access this Express server at the same time, would this sort of data mixing still be an issue, or would each get back its own set of results?
Is this why people use CaaS instead of IaaS solutions?
FYI: this is demo code and the data will not be actually going back to the consumer directly...plans are to have it placed into a database and then re-extracted from the database to download all of the metadata headers for each email.
-Thank you for your time
I can only clear up a small part of this question:
When running containers in the cloud is this kind of thing an issue?
No. Docker is not causing any of the quirky behaviour that you are describing.
Does one need to spin up a new container for each client?
A Docker container can generally serve as many users as the application inside it can. So as long as your application can handle a lot of users (and it should), you don't have to start the same application in multiple containers. That said, when you expect a very large number of customers, there are Docker tools like Docker Compose, Docker Swarm and a lot of alternatives that will enable you to scale up later. For now, you don't need to worry about this at all.
I think I may have found out the issue with my code...and this is actually very important if you are using the node.js googleapis client library...
It is entirely necessary to create a new oauth2Client for each request that comes in:
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);
Problem:
When this oauth2Client is shared it is shared by each and every person that connects at the same time...So it is necessary to create a new one each and every time a user connects to my /threads endpoint so that they do not share the same memory space (i.e. access_token etc.) while the processing is done.
Setting the client secret etc. and creating the oauth2Client just once at the top and then simply resetting the token for each request leads to the conflicts mentioned above.
Solution:
For now simply moving the creation of this oauth2Client into each and every request that comes in makes this work properly.
Each client that connects to the service NEEDS to have their own newly created oauth2Client instance or these types of conflicts will occur...
...it's kind of a no brainer but I still find it odd that there is nothing about this in the docs? and their own examples (https://github.com/googleapis/google-api-nodejs-client) seem to show only one instance being created for the whole of their app...but those examples are snippets so...
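A minimal sketch of that change, reusing the clientSecretJson and the /threads route from the question; the only difference is that the OAuth2 client is constructed inside the handler, so every request gets its own instance:

googleApi.post('/threads', (req, res) => {
  // Create a fresh OAuth2 client per request so concurrent users never
  // share credentials (access_token etc.) in memory.
  const oauth2Client = new google.auth.OAuth2(
    clientSecretJson.web.client_id,
    clientSecretJson.web.client_secret,
    'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
  );
  oauth2Client.credentials = req.body.token;
  // ...then run the same gmail.users.threads.list / getPages logic as before,
  // passing this request-local oauth2Client as 'auth'
});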

HTTP2 push for Express

I'm trying to set up HTTP/2 for an Express app I've built. As I understand it, Express does not support the npm http2 module, so I'm using SPDY. Here's how I'm thinking of going about it; I'd appreciate advice from people who've implemented something similar.
1) Server setup: I want to wrap my existing app with SPDY, to keep the existing routes. options is just an object with a key and a cert for SSL.
const app = express();
...all existing Express stuff, followed by:
spdy
  .createServer(options, app)
  .listen(CONFIG.port, (error) => {
    if (error) {
      console.error(error);
      return process.exit(1)
    } else {
      console.log('Listening on port: ' + port + '.')
    }
  });
2) At this point, I want to enhance some of my existing routes with a conditional PUSH response. I want to check whether there are any updates for the client making a request to the route (the client is called an endpoint, and the updates are an array of JSON objects called endpoint changes), and if so, push them to the client.
My idea is that I will write a function which takes res as one of its parameters, save the endpoint changes as a file (I haven't found a way to push non-file data), add it to a push stream, and then delete the file. Is this the right approach? I also notice that there is a second parameter that the stream takes, which is a req/res object; am I formatting it properly here?
const checkUpdates = async (obj, res) => {
  if (res.push) {
    const endpointChanges = await updateEndpoint(obj).endpointChanges;
    if (endpointChanges) {
      const changePath = `../../cache/endpoint-updates${new Date().toISOString()}.json`;
      const savedChanges = await jsonfile(changePath, endpointChanges);
      if (savedChanges) {
        let stream = res.push(changePath, {req: {'accept': '**/*'}, res: {'content-type': 'application/json'}});
        stream.on('error', function (err) {
          console.log(err);
        });
        stream.end();
        res.end();
        fs.unlinkSync(changePath);
      }
    }
  }
};
3) Then, within my routes, I want to call the checkUpdates method with the relevant parameters, like this:
router.get('/somePath', async (req, res) => {
  await checkUpdates({someInfo}, res);
  ReS(res, {
    message: 'keepalive succeeded'
  }, 200);
});
Is this the right way to implement HTTP2?

Server push with Node.js pushStream method is not working

I am studying HTTP/2 on Node.js, but I have found an issue: the pushStream method is not working
(the client side does not show "Pushed / [fileName]" in the developer tools).
I wonder if the reason is the Node.js version (I installed the latest version, v9.8.0).
My code is the following:
server.js
'use strict'

const fs = require('fs');
const path = require('path');
const http2 = require('http2');
const utils = require('./utils');
const { HTTP2_HEADER_PATH } = http2.constants;
const PORT = process.env.PORT || 3000;

// The files are pushed to stream here
function push(stream, path) {
  const file = utils.getFile(path);
  if (!file) {
    return;
  }
  stream.pushStream({ [HTTP2_HEADER_PATH]: path }, (err, pushStream, headers) => {
    if (err) throw err;
    pushStream.respondWithFD(file.content, file.headers)
  });
}

// Request handler
function onRequest(req, res) {
  const reqPath = req.headers[':path'] === '/' ? '/index.html' : req.headers[':path']
  const file = utils.getFile(reqPath);
  // 404 - File not found
  if (!file) {
    res.statusCode = 404;
    res.end();
    return;
  }
  // Push with index.html
  if (reqPath === '/index.html') {
    push(res.stream, '/assets/main.js');
    push(res.stream, '/assets/style.css');
  } else {
    console.log("requiring non index.html")
  }
  // Serve file
  res.stream.respondWithFD(file.content, file.headers);
}

// creating an http2 server
const server = http2.createSecureServer({
  cert: fs.readFileSync(path.join(__dirname, '/certificate.crt')),
  key: fs.readFileSync(path.join(__dirname, '/privateKey.key'))
}, onRequest);

// start listening
server.listen(PORT, (err) => {
  if (err) {
    console.error(err);
    return -1;
  }
  console.log(`Server listening to port ${PORT}`);
});
utils.js
'use strict';

const fs = require('fs');
const mime = require('mime');

module.exports = {
  getFile: function (path) {
    const filePath = `${__dirname}/public${path}`;
    try {
      const content = fs.openSync(filePath, 'r');
      const contentType = mime.getType(filePath);
      return {
        content,
        headers: {
          'content-type': contentType
        }
      };
    } catch (e) {
      return null;
    }
  }
}
Updated 2020-01-28
Resolved: the reason is that the latest version of Chrome (v65) has a bug that causes the client not to trust the PUSH_PROMISE frame. I rolled back to Chrome v64 and it is working now.
I haven’t tried to run your code but have noticed that Chrome does not allow HTTP/2 push with an untrusted HTTPS certificate (e.g. a self-signed one not yet added to the trust store). Raised a bug with the Chrome team.
If you have the red insecure padlock icon then you could be hitting this issue too. Add the certificate into your trust store, restart Chrome and reload the site, where you should get a green padlock.
Note Chrome needs a certificate with a Subject Alternative Name (SAN) field matching the domain so if you’ve just got the older Subject field then it won’t go green even after adding it to your trust store.
Another option is to look at the Chrome HTTP2 frames by typing this into your URL:
chrome://net-internals/#http2
If you see the push promise frames (with a promised_stream_id), followed by the headers and data on that promised_stream_id, then you know the server side is working.
